Open Access | 16 June 2022

Distributed competitive satellite optical burst switching mechanism for satellite networks with ultra-high link delays
Zhe Zhao, Zhiliang Qiu, Weitao Pan, Hanwen Sun, Ling Zheng
Abstract

The ultra-high link latency of the conventional central reservation switching method in satellite optical networks limits link utilization. In this study, a novel fully distributed competitive satellite optical burst switching (DC-SOBS) mechanism was proposed to overcome the drawbacks of the optical circuit switching method and achieve link multiplexing. Furthermore, the proposed method addressed the low link utilization problem caused by the high link latency of the central-control-related methods. The DC-SOBS mechanism in the ring network of six Geostationary Orbit (GEO) satellites improved link utilization by three orders of magnitude compared with the reservation method. A data conflict processing method was proposed to ensure that the same information was provided to various optical switching nodes, and the same algorithm was used to achieve a consistent scheduling result in each node.

1.

Introduction

With the rapid development of satellite internet, the demand for satellite link transmission and switching capacity is expected to increase sharply in the future. Typically, the microwave communication load of conventional satellites is limited by spectrum resources, bandwidth, and power consumption.1 In contrast, laser links are not limited by spectrum resources,2 and laser communication has a faster transmission rate, larger bandwidth, and strong anti-electromagnetic interference capability.3 Furthermore, an intersatellite link communication capacity as high as 10 Gb/s can be obtained in one wavelength4 using laser links, and the link capacity of the ETS-9 Satellite Communications Project from Japan should achieve 40 to 100 Gb/s.5 However, the power of satellite platforms is limited; for example, the peak power consumption of the Chinese CAST2000 platform is 900 W.6 Ultra-high-speed switching processing in the electrical domain using the conventional optical-electrical-optical switching method consumes considerable power: one typical 100-Gb/s coherent transponder alone consumes 100 W.7 This power consumption is unsuitable for implementation on a satellite platform. Additionally, high bandwidth brings great processing pressure to the switching system, and the energy consumption of electrical packet switching may exceed the maximum energy payload of the satellite.5 In Ref. 8, ultracompact and low-power-consumption optical switches were designed, and a switching power of only 0.15 mW was obtained.

In data center networks, many optical circuit switching (OCS)-, optical burst switching (OBS)-, and optical packet switching (OPS)-related methods have been proposed to improve network capacity. In the methods proposed in Refs. 9–12, data are sent only after link resources have been obtained, which requires resource confirmation and feedback from each switching node along the whole transmission link. In Ref. 13, a burst chain switching mechanism was proposed to improve link utilization and avoid conflicts. However, this method requires link reservation, which is not suitable for long-delay satellite links. Yan et al.14 proposed a method to implement OPS in a data center network; however, the method requires additional buffers and fiber delay lines, which is not suitable for engineering implementation on satellites. Tode et al.15 proposed an OCS/OPS service offload method to reduce the probability of optical switching conflicts.

In Refs. 16–18, an optical time-slice switching method was proposed. This method requires system synchronization to complete the optical time-slice service demarcation. Furthermore, a centralized controller is required to allocate the link resources before service transmission. In Refs. 5, 19, and 20, a timeslot-based optical switching method was proposed; however, this approach requires strict network synchronization and a ring topology. References 3, 8, 21, and 22 provide a hardware basis for the distributed competitive satellite optical burst switching (DC-SOBS) mechanism in this study. An optical switching time of 2.3 μs was realized in the experiments of Ref. 8. Zhai et al.3 implemented a novel OBS switching hardware platform that is suitable for implementing the DC-SOBS switching method proposed in this paper. Because a wavelength division multiplexing system needs high power, and wavelength-dependent crosstalk in space is currently difficult to reduce,5 we considered a single-wavelength data channel for analysis and evaluation. The impact of dense wavelength division multiplexing (DWDM) on the algorithm performance and link bandwidth will be considered in the future. Wang et al.1 and Kumar et al.23 proposed a burst assembly algorithm and a satellite OBS node structure; however, only the implementation of the optical network accessing node was addressed, and the optical switching node was not involved.

In this study, a distributed switching mechanism based on OBS was proposed considering the characteristics of high-orbit satellite link transmission delay. The proposed mechanism does not require network time synchronization or a central control node. Instead, each node in the network completes the scheduling and switching independently and supports both end-to-end guaranteed and hop-to-hop competitive transmission services. Therefore, this technique not only ensures transmission reliability but also considerably improves the utilization of satellite links and is suitable for Geostationary Orbit (GEO) satellite optical networks with a small number of satellites.

In the conventional OBS method, a fixed processing time is preset between the burst data packet (BDP) and burst control packet (BCP) to ensure that each switching node in the network can successfully schedule and switch BDPs. In the proposed method, the fixed processing time is categorized into four parts, namely, registration, conflict detection, conflict information distribution, and switching scheduling, to support the proposed DC-SOBS mechanism. The nodes in the network do not require time synchronization. To reduce the computational complexity of the nodes, a scheduling time window of configurable length is introduced to divide the scheduled BDPs into subsets for processing. A BDP collision detection and results distribution algorithm sends conflict information to downstream switching nodes. Each switching node obtains the conflict state of the previous nodes and adopts the same scheduling algorithm to realize consistent switching results with the related nodes. Because a conflict chain is formed between BDPs, which could lead to an unbounded chain of conflict information, two BDP hop count constraint variables are introduced to limit the conflict chain to a specific range so that the transmission of conflict information can be completed within a limited time.

2.

DC-SOBS Mechanism Design

2.1.

Satellite Network Topology

The satellite optical switching network includes two types of nodes, namely, boundary accessing nodes and internal switching nodes. Photoelectric aggregation and de-aggregation processing is completed at the accessing nodes of the satellite optical switching network.1,24,25 The optical switching function is processed at the internal nodes of the network. As shown in Fig. 1(a), in a six-GEO-satellite ring network, each GEO satellite completes both the accessing and switching functions. The relationship between the number of GEO satellites and coverage can be seen in Ref. 26: as the number of GEO satellites increases, the global coverage gap decreases until multi-satellite coverage of key regions is achieved. In this study, the same six-GEO-satellite ring network as in Ref. 5 is used for analysis and latency performance comparison, and the proposed network topology can achieve better two-GEO-satellite coverage. The satellite optical switching network domain is independent of the electrical network domain. Independent addressing and routing are performed in the optical switching network, and the accessing node of the optical switching network includes the gateway function.

Fig. 1

A typical ring network composed of six GEO satellites: (a) network structure and (b) network topology and ports connection relationship.


2.2.

Data Format and Switching Type

BDPs and BCPs are transmitted in the optical switching system. BDPs are transparently transmitted after aggregation at the boundary nodes of the network. BCPs include burst registration packets (BRPs), burst registration response packets (BRPas), and switching conflict detection packets (BCoLs). BRPs are generated at the boundary source node of the satellite optical network, BCoLs are generated at the optical switching nodes, and BRPas are generated at every switching node that a BDP passes through. Detailed descriptions of the various types of packets are presented in Tables 1–6.

Table 1

Data types of the optical switching network.

Classification | Name | Function
BDP | BDP | Transparent optical data formed by aggregation at the optical network boundary nodes
BCP | BRP | Generated according to the information carried by the BDP; each BRP corresponds to one BDP
BCP | BRPa | Indicates whether the BDP successfully obtains link resources; each BRPa corresponds to one BDP
BCP | BCoL | Generated by the optical network switching node according to the conflict detection results

Table 2

Format of the i-th BCP.

Field | Identification | Description
Destination address | Dsti | Same as the corresponding BDP destination address
Source address | Srci | Same as the corresponding BDP source address
Length | Leni | Packet length, including header
BDP ID | BdIDi | Corresponding BDP frame ID number
Type | Typei | Distinguishes BRP, BRPa, and BCoL
Payload | Payloadi | Frame payload
Check | Checki | CRC-32 check value of the whole frame

Table 3

Format of the i-th BRP.

Field | Identification | Description
BRP type | BRTypi | Indicates whether the corresponding BDPi is reservation or competitive data
Reservation type | RevTypi | Link reservation or release
BDP relative arrival time | BdArrTi | Arrival time of BDPi relative to BRPi
BDP current hops | BdHopCi | Number of hops before BDPi reaches the current node

Table 4

Format of the i-th BRPa.

Field | Identification | Description
BRPa type | BRaTypi | Indicates whether the corresponding BDPi is reservation or competitive data
Reservation flag | RevStatei | State of the BDP obtaining link resources, busy or free
Release flag | RelStatei | State of the link release, success or fail

Table 5

Format of the i-th BCoL.

Field | Identification | Description
BCoL type | BCoLTypi | Indicates whether the corresponding BDPi is reservation or competitive data
Num of collision units | ColNumi | Number of collision units
Collision units 1–N | bcoli | Content of the collision units

Table 6

Format of the first collision unit of the i-th BCoL.

Field | Identification | Description
Collision BDP ID | BColIDil | ID of the colliding BDP in this unit
Collision BDP type | ColBdTypil | Type of the colliding BDP in this unit, reservation or competitive data
Collision hop num | BColHil | Number of the hop at which the collision occurs
Collision BDP arrival time | BColArrTil | Arrival time of the colliding BDP

Each optical link is composed of a control channel and data channel, which transmit BCPs and BDPs, respectively. The accessing node of the satellite optical network completes the generation, transmission, and reception of BCPs and BDPs. The switching node can obtain the arrival time, type, destination address, and other information of each BDP through a corresponding BRP (one type of BCP) and send the switching collision result, BCoLs, to other nodes.
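To make the control-plane formats of Tables 2–6 concrete, the sketch below models them as plain data records. It is only a minimal Python illustration using the flattened field names from the tables; field widths, encodings, and the container types are assumptions, since the paper does not specify them.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BCPHeader:           # common fields of every BCP (Table 2)
    dst: int               # same as the corresponding BDP destination address
    src: int               # same as the corresponding BDP source address
    length: int            # packet length, including header
    bd_id: int             # corresponding BDP frame ID number
    bcp_type: str          # "BRP", "BRPa", or "BCoL"

@dataclass
class BRP(BCPHeader):      # burst registration packet (Table 3)
    is_reservation: bool = False   # reservation or competitive data
    rev_type: str = "reserve"      # link reservation or release
    bd_arr_t: float = 0.0          # arrival time of the BDP relative to this BRP
    bd_hop_c: int = 0              # hops already traversed by the BDP

@dataclass
class CollisionUnit:       # one collision unit of a BCoL (Table 6)
    col_bd_id: int         # ID of the colliding BDP
    col_bd_type: str       # reservation or competitive data
    col_hop: int           # hop at which the collision occurred
    col_arr_t: float       # arrival time of the colliding BDP

@dataclass
class BCoL(BCPHeader):     # switching conflict detection packet (Tables 5 and 6)
    units: List[CollisionUnit] = field(default_factory=list)
```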

To satisfy the transmission requirements of various types of services, the proposed satellite optical switching method supports both random collision switching and link reservation switching, as shown in Fig. 2. The boundary node of the satellite optical network sends the corresponding BRP before sending the BDP. The time interval between a BRP and its BDP is reserved according to the total processing time of the various types of BCPs in the optical network. The arrival time and switching information of a BDP can be obtained after receiving its BRP. If the source node is informed that a BDP transmission failed, it resends the BDP after backing off by a random time value within one scheduling cycle, repeating until the BDP is successfully sent.
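The retransmission rule above can be sketched as follows. This is an illustrative fragment only; the callable names, the scheduling-cycle parameter, and the use of a uniform random backoff are assumptions, not details given in the paper.

```python
import random
import time

def backoff_and_resend(send_bdp, bdp, scheduling_cycle_s):
    """Resend a failed BDP after a random backoff within one scheduling cycle."""
    while not send_bdp(bdp):                              # transmission failed (no successful BRPa)
        delay = random.uniform(0.0, scheduling_cycle_s)   # random backoff inside one scheduling cycle
        time.sleep(delay)                                 # stand-in for the node's hardware timer
```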

Fig. 2

Two switching types supported by DC-SOBS: (a) competitive switching method and (b) reservation switching method.


2.3.

DC-SOBS Mechanism

At the boundary source node of the optical switching network, the electrical packets sent to the same destination node through the optical network are aggregated according to the BDP format and temporarily stored in the high-speed BDP buffer for transmission. Because all-optical transparent transmission and switching are used for the BDP in the data channel of the network, no specific packet format is required when generating a BDP. A functional diagram of the accessing node is shown in Fig. 3. The length of a BDP is specified as a fixed value TBDP, and TG is the protection time of the optical switching operation at the beginning and end of a BDP. The effective length of a BDP is TBl, which is related to TG as shown in Eq. (1). If the remaining space is not sufficient to encapsulate the last electrical frame, the BDP encapsulation is completed and that electrical frame is encapsulated into the next BDP. When the current BDP transmission time is reached, regardless of whether the BDP is fully filled, the aggregated electrical packets are transformed into a fixed-length BDP and sent to the optical network. BDP de-aggregation is performed at the destination boundary accessing node of the optical network. Each BRP and its BDP form a group, and the transmissions of the groups are independent of each other. Because the processing time of each optical switching node is reserved, the BRP is sent first at time tSbr, and the corresponding BDP is sent at time tSbd. The offset interval between each BRP and BDP is Toffset, which satisfies Eq. (2) (Fig. 4)

Eq. (1)

TBDP=2TG+TBl,

Eq. (2)

Toffset=tSbd−tSbr.
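A minimal sketch of the aggregation and send-time logic around Eqs. (1) and (2): electrical frames are packed into a fixed-length BDP whose effective payload time is TBl = TBDP − 2TG, and the BRP is sent Toffset ahead of the BDP. The function names, frame representation, and rate parameter are placeholders, not values from the paper.

```python
def aggregate_bdp(frames_bits, t_bdp_s, t_g_s, rate_bps):
    """Pack electrical frames into one fixed-length BDP (Eq. 1)."""
    t_bl_s = t_bdp_s - 2.0 * t_g_s           # effective payload time: TBl = TBDP - 2*TG
    capacity_bits = int(t_bl_s * rate_bps)   # usable bits in one BDP
    packed, used = [], 0
    for f in frames_bits:
        if used + f > capacity_bits:         # frame does not fit: leave it for the next BDP
            break
        packed.append(f)
        used += f
    remaining = frames_bits[len(packed):]
    return packed, remaining

def send_times(t_sbr_s, t_offset_s):
    """BRP is sent first; the BDP follows after Toffset (Eq. 2): tSbd = tSbr + Toffset."""
    return t_sbr_s, t_sbr_s + t_offset_s
```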

Fig. 3

Functional diagram of the accessing node.


Fig. 4

Functional diagram of the switching node.


After the optical switching nodes receive BRPs and BCoLs, they complete the switching processing independently. Each optical switching node executes four algorithms, namely, BRP relay, BDP collision detection, BDP collision results distribution, and BDP switching scheduling. BDP collision detection is performed at each node, and the detection results are sent to the corresponding later nodes. After si receives the detection results of the previous-hop nodes, it executes the BDP switching scheduling iteration to obtain the scheduling results and configure the optical devices. A functional diagram of the optical switching node is shown in Fig. 4. When the timer reaches the scheduled time point, the switching node executes the scheduling of all the BDPs in a subset schedule window. Each egress port has a scheduling unit, which obtains the results within a fixed time and configures the optical devices to complete switching. The time relationship of each egress port is shown in Fig. 5. For BDPi, BdArrTi is obtained when BRPi arrives at the switching node. When the most recent BDP arrival time reaches the reserved scheduling time TSP, the switching node starts the BDP switching scheduling. One switching scheduling pass completes the scheduling of all BRPs within one scheduling window of length tWD. All BRPj whose arrival times BdArrTj satisfy Eq. (3) are scheduled in the current scheduling cycle. According to the scheduling result, the configuration operation is performed before BDPi arrives, completing the switching function. The trigger time of the next scheduling cycle is determined by the arrival time of the first BDPk outside the current scheduling window

Eq. (3)

BdArrTi≤BdArrTj<BdArrTi+tWD.
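The window rule of Eq. (3) groups all registered BDPs whose arrival times fall within tWD of the first one into a single scheduling subset. A possible sketch using a simple sorted sweep (the data layout and function name are assumed):

```python
def scheduling_subsets(arrival_times_s, t_wd_s):
    """Split registered BDP arrival times into scheduling windows of width tWD (Eq. 3)."""
    subsets, current, window_start = [], [], None
    for t in sorted(arrival_times_s):
        if window_start is None or t >= window_start + t_wd_s:
            if current:
                subsets.append(current)
            current, window_start = [], t   # first BDP outside the window opens the next cycle
        current.append(t)
    if current:
        subsets.append(current)
    return subsets

# Example: with tWD = 1 ms, the first three arrivals form one subset, the last one another.
print(scheduling_subsets([0.0, 0.0002, 0.0009, 0.0015], 0.001))
```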

Fig. 5

Temporal relationship between BRP11, BRP12, BRP22, and their respective associated BDPs in scheduling subsets 1 and 2.


Formally, the topology of the network in Fig. 1(b) is defined by an undirected graph G(S, L), where the switching nodes in the satellites are the vertices S and the communication links connecting the vertices are the edges L. si represents the i-th switching node, S = {s1, s2, …, sN}, and the total number of switching nodes in the network is N. li represents the i-th laser link, L = {l1, l2, …, lM}, and the total number of links in the network is M. B represents the set of BDPs in the optical switching network, bi represents the i-th BDP, and B = {b1, b2, …}. Br represents the set of BRPs, and bri denotes the BRP corresponding to bi, Br = {br1, br2, …}. Vi(Si*, Pi, Ti) indicates the set of switching nodes, egress ports, and time slices that bi occupies in the optical switching network, where Si* represents the switching node set, Pi represents the set of egress ports used by bi, and Ti is the set of arrival times of bi. hi is the total number of switching nodes passed by bi. vi(sik*, pik, tik) indicates that bi arrives at sik* at time tik and is output from egress port pik. Si* = {si1*, si2*, …, sihi*}, Pi = {pi1, pi2, …, pihi}, and Ti = {ti1, ti2, …, tihi}.
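The formal model above can be captured with a few small records. The sketch below is one possible in-memory representation; the names mirror the notation G(S, L) and Vi(Si*, Pi, Ti), but the container choices are our assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Network:
    nodes: List[str]                      # S = {s1, ..., sN}
    links: List[Tuple[str, str]]          # L = {l1, ..., lM}, undirected edges

@dataclass
class PathRecord:
    """Vi(Si*, Pi, Ti): nodes, egress ports, and arrival times occupied by one BDP bi."""
    bdp_id: int
    nodes: List[str]                      # Si* = {si1*, ..., sihi*}
    ports: List[int]                      # Pi  = {pi1, ..., pihi}
    arrival_times: List[float]            # Ti  = {ti1, ..., tihi}

    def hop(self, k: int) -> Tuple[str, int, float]:
        """vi(sik*, pik, tik): where and when bi appears at its k-th switching node."""
        return self.nodes[k], self.ports[k], self.arrival_times[k]
```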

Because the DC-SOBS mechanism is adopted in each switching node, node sq must determine whether bi can successfully arrive when performing the switching scheduling of bi. Therefore, all the BCoLs of bi generated at the nodes before sq, that is, at the first q−1 nodes on the path, must be obtained, and the same scheduling algorithm is used to iteratively obtain the scheduling result of sq. Because a bj that conflicts with bi at one of the q−1 preceding nodes may also conflict with a bk at any other node, a conflict chain is formed between bi, bj, and bk in the network. Each collision requires a scheduling iteration to determine which of bi, bj, and bk can pass. The collision chain of BDPs is shown in Fig. 6. When scheduling BDP1 in S7, not only the conflict in S7 but also the scheduling results of BDP1 to BDP5 from S1 to S6 should be considered, because a collision chain is formed from BDP1 to BDP5. The dashed BDPs in Fig. 6 indicate the BDPs discarded at the current switching node.

Fig. 6

Collision chain diagram of BDP1 in switching node S7. A dashed background displays a discarded BDP in the current switching node.


To limit the number of scheduling iterations in each switching node to a certain range, the range of the collision chain must be limited. Two hop-count constraints are added for each BDP: TTLbh, the number of collision hops, and TTLbc, the cumulative number of collisions. TTLbh is used to control the length of each collision chain branch. One variable TTLbhi corresponds to one bi, with the initial value TTLbhi = TTLbh. When bi is involved in a conflict, TTLbhi is reduced by 1, and bi is discarded when TTLbhi reaches 0. The function of TTLbc is to control the number of branches in a collision chain. One variable TTLbci also corresponds to one bi, with the initial value TTLbci = TTLbc. The counting rule takes the minimum TTLbci of all conflicting BDPs minus 1 as the new TTLbc of all colliding BDPs. Figure 7 shows a collision example with TTLbc = 4. Because two collision constraints (TTLbh and TTLbc) are set, the current BDP is directly discarded when a collision occurs at a hop count from TTLbh to Hmax, whereas when the conflict of the next BDP occurs at a hop count from 1 to TTLbh − 1, the collision chain continues to link to the next BDP, as displayed in Fig. 7. All BDPs in the collision chain are discarded at the next switching node when the TTLbc constraint is reached. Thus, the maximum hop length of the collision chain, LColMax, is given by Eq. (4). The scheduling iteration parameter is set to Nsch = LColMax. Hmax is the maximum number of hops of the BDPs in the satellite optical switching network.

Eq. (4)

LColMax=Hmax−1+(TTLbc−1)(TTLbh−1).
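For the six-satellite simulation setting used later (Hmax = 3, TTLbc = 2, TTLbh = 2), Eq. (4) gives LColMax = 3 − 1 + (2 − 1)(2 − 1) = 3, so Nsch = 3 scheduling iterations suffice. A one-line check:

```python
def l_col_max(h_max, ttl_bc, ttl_bh):
    """Maximum collision-chain length, Eq. (4)."""
    return h_max - 1 + (ttl_bc - 1) * (ttl_bh - 1)

print(l_col_max(3, 2, 2))   # -> 3, so Nsch = 3 for the six-GEO ring simulation of Sec. 3
```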

Fig. 7

Schematic diagram of the maximum length of the collision chain when TTLbc=4.


The switching nodes execute the DC-SOBS mechanism proposed in this study in four steps: BRP relay, BDP collision detection, BDP collision results distribution, and BDP switching scheduling. The BRP relay step registers the arrival time, forwarding address, remaining hops, and other information of the BDPs. The BRP relay processing is presented in Algorithm 1.

Algorithm 1

BRPs relay

Require: bri;
Ensure: Arrival information table ArriInfTable and a new bri*;
 Receive the bri of all control channels after O/E conversion and start the data processing timer TimerBR when the data arrives;
if the BCP is correct then
  Identify the data type;
  if Typi = BRP AND BRTypi indicates competitive data then
   Record the values of Dsti, Srci, BdNumi, BdArrTi, BdHopCi, BdHopRi in bri to ArriInfTable;
   Update BdArrTi = BdArrTi − tBRmax, BdHopCi = BdHopCi + 1, BdHopRi = BdHopRi − 1;
   Form a new bri* frame;
   According to Dsti, obtain the send port of bri*;
   if TimerBR = tBRmax then
    Send the new bri* frame;
   end if
  else
   Exit the BRP relay algorithm;
  end if
else
  Discard the BCP;
end if

After receiving a BRP, the switching node opens the BRP relay wait time window, as shown in Fig. 5, and starts BDP collision detection when the window timer satisfies the constraint. The relay wait time window of bri is tcwi, as given by Eq. (5). BDP collision detection determines whether bri has a conflict and then generates BCoLs according to the collision detection results. The BDP collision detection process is shown in Algorithm 2. When the accessing node of the network generates BDPs, the transmission interval between each BRP and BDP is fixed, and a processing time overhead is introduced each time the BRP passes through a switching node. The BRP processing delay is reserved according to the maximum BRP processing delay over all switching nodes. Therefore, the time interval between the BRP and BDP is reduced by tBRmax after each switching node. If a switching node receives the bri of bi, then the brj of any other bj in conflict with bi will arrive within tcwi. When the wait timer Timercwi = tcwi, the node executes the steps in Algorithm 2 to search ArriInfTable for conflicting BDPs and generate a BCoL frame. When the BCoL processing timer TimerBcoli = tBCmax, the switching node sends the BCoL frame

Eq. (5)

tcwi=(Hmax−BrHopCi)tBRmax+TBDP.
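Equation (5) sizes the relay wait window from the remaining hops: the fewer hops a BRP has already taken, the longer a conflicting BRP may still be in flight. A minimal sketch with the simulation-style values from Sec. 3 (tBRmax = 0.5 μs, TBDP = 500 μs); the helper name is ours:

```python
def relay_wait_window(h_max, br_hop_c, t_br_max_s, t_bdp_s):
    """tcwi = (Hmax - BrHopCi) * tBRmax + TBDP (Eq. 5)."""
    return (h_max - br_hop_c) * t_br_max_s + t_bdp_s

# e.g. a BRP that has taken 1 of at most 3 hops, with tBRmax = 0.5 us and TBDP = 500 us
print(relay_wait_window(3, 1, 0.5e-6, 500e-6))   # -> 0.000501 s
```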

Algorithm 2

BDPs collision detection

Require: BdArrTi in ArriInfTable;
Ensure: BCoL frames and collision table ColTab;
 After the step in line 5 of Algorithm 1 (recording the BRP information in ArriInfTable), start BDP collision detection;
 Open the BRP relay wait time window corresponding to bri and start the wait timer Timercwi;
 Obtain the output port according to the Dsti of bri;
 Record the BdNumi and BdArrTi of all bri in the window;
if Timercwi = tcwi then
  Start the BCoL processing timer TimerBcoli;
  for all bri, brj in the window do
   if [BdArrTi, BdArrTi+TBDP] ∩ [BdArrTj, BdArrTj+TBDP] ≠ ∅ AND bri and brj use the same output port then
    Record the BdNumi, BRTypi, BdArrTi, and BdHopCi of bri and brj in ColTab;
    According to the recorded information, generate bcol(i,j..)m;
    According to bcol(i,j..)m, generate the BCoL frame;
    if TimerBcoli = tBCmax then
     Send the BCoL frame;
    end if
   end if
  end for
end if
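The core check in Algorithm 2 is whether two BDPs headed for the same egress port overlap in time. A small sketch of that pairwise test (the record layout and function name are assumed):

```python
from itertools import combinations

def detect_collisions(registrations, t_bdp_s):
    """Return pairs of BDP IDs whose occupancy intervals overlap on the same egress port.

    registrations -- list of (bdp_id, egress_port, arrival_time_s) taken from ArriInfTable
    """
    collisions = []
    for (id_a, port_a, t_a), (id_b, port_b, t_b) in combinations(registrations, 2):
        same_port = port_a == port_b
        # intervals [t, t + TBDP] intersect iff neither starts after the other ends
        overlap = t_a < t_b + t_bdp_s and t_b < t_a + t_bdp_s
        if same_port and overlap:
            collisions.append((id_a, id_b))
    return collisions

# Two 500-us BDPs arriving 200 us apart on port 0 collide; the one on port 1 does not.
print(detect_collisions([(1, 0, 0.0), (2, 0, 200e-6), (3, 1, 100e-6)], 500e-6))  # -> [(1, 2)]
```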

After collision detection at sij* is complete, the node starts the BDP collision results distribution, as shown in Algorithm 3. The collision result distribution ensures that the BCoLs related to bi generated by sik*, k ∈ (1, j−1), are received before sij* starts the switching scheduling of bi. Because collision detection is performed independently in each switching node according to the received bri, the BCoLs are sent according to the destination addresses of all bi involved in the conflict results. All switching nodes sil*, l ∈ (j+1, hi), on the bi path Vi(Si*) can thus obtain the collision information of sij*. However, hidden conflicts may occur in the subsequent switching nodes; that is, a previous switching node does not send the corresponding BCoLs to the subsequent switching nodes. The collision hiding and information replication are shown in Fig. 8. For example, the partial switching path of BDP1 is si−1, si, and sj, and the partial switching path of BDP2 is si−1, si, and si+1. At egress port A of switching node si−1, BDP1 and BDP2 conflict; after scheduling, BDP2 is forwarded from egress port A of si−1 to switching node si, and BDP1 is discarded. At egress port A of switching node si, BDP2 and BDP3 conflict; after scheduling, BDP3 is forwarded to switching node si+1 from egress port A of si, and BDP2 is discarded. If there is no conflict at switching node si+1, BDP3 passes through si+1 to switching node si+2. BDP4 and BDP3 conflict at switching node si+2, and the scheduling result discards b3 and allows b4 to pass. Because the collision information of BDP1 and BDP2 can only be obtained at the switching nodes on their switching paths, switching node si+2 cannot obtain the collision information of BDP1. However, when scheduling BDP3 and BDP4, switching node si+2 must consider the collision between BDP2 and BDP3 at switching node si and between BDP1 and BDP2 at si−1 to determine whether BDP3 was dropped at the previous nodes. Here, BDP1 is the hidden conflict BDP of switching node si+2. Therefore, it is necessary to copy BCoL12, the collision detection result of BDP1 and BDP2, to switching node si+2 and the related subsequent switching nodes. Each egress port performs BCoL processing independently.

Algorithm 3

BDP collision results distribution

Require: ColTab and received BCoL;
Ensure: New ColTab and BCoLs;
 Start the BCoL sending timer TimerCR;
 Record the BColHi, BColNumi, BColArrTi, and BColTypi of the BCoL;
for all collision units bcol(i,j..)m in the BCoL do
  Search ColTab according to the BColNumi of the collision unit;
  if BColNumi exists in the collision information table then
   Record all BColNumj that conflict with BColNumi in ColTab;
   for all BColNumj do
    Search the collision units in the BCoL according to BColNumj;
    if BColNumj does not appear in the collision units then
     According to the Dsti of BColNumj, copy the BCoL payload and generate a new BCoL;
    end if
   end for
  end if
end for
Update the previous BCoL, and send the new BCoLs;
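The replication rule of Algorithm 3 can be paraphrased as: for every colliding BDP named in an incoming BCoL, look up its local conflict partners; any partner not already mentioned in that BCoL is a potential hidden conflict, so the collision information is copied toward that partner's destination. The rough sketch below assumes simple dictionary layouts for ColTab and the address table and is not the authors' implementation.

```python
def replicate_hidden_bcols(bcol_units, col_tab, dst_of):
    """Return (destination, unit) pairs for BCoL copies that must be generated.

    bcol_units -- collision units of the received BCoL, each a list of BDP IDs
    col_tab    -- local collision table: bdp_id -> set of locally conflicting BDP IDs
    dst_of     -- bdp_id -> destination address (used to route the copied BCoL)
    """
    mentioned = {bdp for unit in bcol_units for bdp in unit}
    copies = []
    for unit in bcol_units:
        for bdp in unit:
            for partner in col_tab.get(bdp, set()):
                if partner not in mentioned:      # hidden conflict: partner's path never saw this BCoL
                    copies.append((dst_of[partner], unit))
    return copies
```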

Fig. 8

Hidden BCoL and its replication. BCoL12 is a hidden BCoL for switching nodes from si+2, replicated at switching node si+2, and forwarded to subsequent switching nodes.


As shown in Fig. 5, when the latest BDP arrival time satisfies the constraints in Eq. (6), the switching scheduling of all BDPs in the schedule window is performed. Here, tSHmax is the scheduling time, and tSCmax is the configuration computation time. During BDP switching scheduling, BDP validity detection is performed first, and each egress port conducts BDP validity detection independently. By iterating over the received BCoLs of the previous switching nodes, the BDPs that can actually reach the current switching node are screened, and these BDPs are then scheduled to obtain the results. According to the results, the optical switching device is configured, and switching is completed when the BDPs reach the switching node. The BDP switching scheduling is displayed in Algorithm 4.

Eq. (6)

  BdArrTi=tSHmax+tSCmax.

First, the identification numbers of all bi in this scheduling cycle are obtained; then, the collision information of these bi is searched in ColTab to obtain their conflict status at the previous switching nodes. BDP validity detection and switching scheduling are performed according to the collision tree. Each node of the collision tree is a conflict switching node on the BDP switching path, and the information in the node is the collision information related to the BDP at that conflict switching node. The next-level conflict tree nodes are established according to the conflict information: for each BDP from the various ingress ports that conflicts at the current node, its upper-level switching node corresponds to a lower-level collision tree node. The root node is the current switching node, and the construction method of the collision tree is shown in Fig. 9. BDP1 and BDP4 establish a collision tree at switching node si. All nodes at the same level of the collision tree are evaluated simultaneously.

Fig. 9

Schematic diagram of conflict tree construction for BDP1 and BDP2 at root switch node si.


If the collision tree level of a BDP is greater than or equal to Nsch, the BDP exceeds the collision-restriction conditions and is directly discarded. BDPs that are not directly discarded are evaluated iteratively from the leaf nodes to the root switching node. According to TTLbc and TTLbh, BDP validity detection determines whether the BDP at the current switching node is effective: if the BDP passes, it is valid; if the BDP is discarded, it is invalid. Then, the collision tree of the next BDP is evaluated. After all BDPs of the current scheduling cycle have been evaluated, the results of the current node are obtained.

In this study, a simple first-come-first-served (FCFS) scheduling strategy is used to verify the realizability and performance of the switching method. After BDP validity detection is performed in each scheduling cycle, the BDPs that are valid at the current switching node are scheduled, and reserved BDPs are scheduled first. Reserved BDPs can pre-empt competitive BDPs. Because BDPs at the end of the current cycle may span into the next scheduling cycle and affect the BDP scheduling of that cycle, the BDPs of the previous cycle must be considered when scheduling the next cycle. The specific switching scheduling algorithm is as follows:

Eq. (7)

ttlbcl=mink∈(i:j)(ttlbck)−1, l∈(i:j).

Algorithm 4

BDP switching scheduling

Require: ArriInfTable, ColTab, TTLbc, TTLbh;
Ensure: Configuration of the optical devices;
for all bl in the scheduling window that do not conflict with reservation BDPs do
  Search ColTab for bl and build the collision tree of bl;
  if the depth of the collision tree of bl is greater than Nsch then
   Discard the current bl;
  else
   Set counters ttlbcl = TTLbc and ttlbhl = TTLbh for each BDP in the collision tree;
   for each level from the leaf nodes to the root node of the collision tree do
    if ttlbhl == 0 or ttlbcl == 0 or BdHopRl > ttlbhl then
     Discard the current bl;
    else
     for all other bm except bl in the current leaf node do
      if ttlbhm == 0 or ttlbcm == 0 then
       Discard bm;
      end if
     end for
     According to FCFS principles, select the valid BDPs;
     Update ttlbc for all conflicting BDPs in the current leaf node according to Eq. (7);
     Update ttlbh = ttlbh − 1 for all conflicting BDPs in the current leaf node;
     Discard the invalid BDPs and record the results;
    end if
   end for
  end if
end for
Send BRPas and configure the optical devices;
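A compact sketch of the per-window selection used in Algorithm 4: reserved BDPs are granted first, competitive BDPs are granted in arrival order, and the TTL counters of all conflicting BDPs are updated per Eq. (7) and the ttlbh rule. The record layout and function name are ours; this is a sketch of the selection step, not the authors' implementation.

```python
def fcfs_select(conflicting):
    """Pick the BDP that passes this egress port and update the counters of all parties.

    conflicting -- list of dicts with keys: id, arrival, reserved, ttlbc, ttlbh
    Returns (winner, losers); losers are the BDPs discarded at this node.
    """
    # reserved BDPs pre-empt competitive ones; ties broken first-come-first-served
    winner = min(conflicting, key=lambda b: (not b["reserved"], b["arrival"]))
    new_ttlbc = min(b["ttlbc"] for b in conflicting) - 1    # Eq. (7): shared new ttlbc for the group
    for b in conflicting:
        b["ttlbc"] = new_ttlbc
        b["ttlbh"] -= 1                                      # one more collision hop consumed
    losers = [b for b in conflicting if b is not winner]
    return winner, losers
```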

2.4.

Analysis of Transmission Interval Toffset

The reserved processing time in Toffset includes four parts: BRP relay wait time TBR of the switching node, collision detection time TCD, BDPs collision results distribution time TCR, and switching scheduling time TSP. As shown in Eq. (8), with the DC-SOBS mechanism, BRP processing in the electrical domain introduces processing overhead time, whereas BDP does not introduce this time when passing through the switching node. Therefore, the time interval between BDP and BRP is shortened when passing through a network-switching node. When a network-accessing node sends BRP and BDP pairs, it is necessary to ensure that the time interval Toffset between BDP and BRP can still satisfy the processing time of the last optical switching node when the BDP reaches the last switching node on the switching path. The reservation-time relationship of each part in Toffset is shown in Fig. 10, and a quantitative analysis and description of Toffset are as follows:

Eq. (8)

Toffset=TBR+TCD+TCR+TSP.

Fig. 10

Detailed description of Eq. (8) including obtaining the time values of the components of Toffset.


First, TBR is analyzed. The BRP relay processing time of bi reserved in Toffset is TBR(bi), and TBR(bi) satisfies the constraints of Eq. (9), where tBRj represents the BRP processing time of sij*, and hi represents the number of switching nodes passed by bi

Eq. (9)

TBR(bi)≥∑j=1..hi tBRj+TBDP.

Second, the BDP collision detection time TCD is analyzed. The time for switching node sn to perform BDP collision detection is tCDn, and each sn has a different tCDn, so it is difficult for an accessing node to know tCD1, tCD2, …, tCDN individually. Let tCDmax be the upper bound of tCDn, as shown in Eq. (10). Thus, we take TCD = tCDmax

Eq. (10)

tCDmax=maxn∈S(tCDn).

Third, the BDP collision results distribution time TCR is analyzed. After completing BDP collision detection, the detection results (BCoLs) should be sent to all relevant subsequent switching nodes, and these nodes should have received all BCoLs before they perform switching scheduling. The maximum scheduling iteration depth is Nsch, so TCR reserves time for receiving the BCoLs of Nsch levels of switching nodes. Therefore, the analyses of TCR and TBR are similar. We set tCRmax as the upper bound of tCRn, as expressed in Eq. (11). TCR satisfies the constraint in Eq. (12), where TBCoL is the sending time of a BCoL

Eq. (11)

tCRmax=maxn∈S(tCRn),

Eq. (12)

TCR=(Hmax−1+(TTLbc−1)(TTLbh−1))tCRmax+TBCoL.

Fourth, the BDP switching scheduling time TSP is analyzed. TSP includes a fixed schedule window time tWD, the switching scheduling time tSHmax, and the configuration time tSCmax. The fixed window time tWD is a constant value set by the user. The switching schedule time of each sn is distinct; here, tSHmax is set as the maximum of the switching schedule times tSHn over S, as displayed in Eqs. (13)–(15)

Eq. (13)

tSHmax=maxn∈S(tSHn),

Eq. (14)

tSCmax=maxn∈S(tSCn),

Eq. (15)

TSP=tWD+tSHmax+tSCmax.

In summary, the time interval Toffset is expressed by the following equation

Eq. (16)

Toffset=Hmax(tBRmax+tCRmax)+((TTLbc−1)(TTLbh−1)−1)tCRmax+TBDP+TCD+TBCoL+TSP.
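Plugging the Sec. 3 simulation values into Eq. (16) gives a feel for the reserved offset: Hmax = 3, TTLbc = TTLbh = 2, tBRmax = tCRmax = 0.5 μs, TBDP = 500 μs, TCD = 5 μs, and TSP = tWD + tSHmax + tSCmax = 1000 + 20 + 5 μs. TBCoL is the BCoL send time, taken here as roughly 0.1 μs from the Table 7 frame sizes; that value is our assumption, since the paper does not quote it directly.

```python
# Worked example of Eq. (16) with the simulation parameters of Sec. 3 (times in microseconds).
H_MAX, TTL_BC, TTL_BH = 3, 2, 2
T_BR_MAX = T_CR_MAX = 0.5
T_BDP, T_CD = 500.0, 5.0
T_BCOL = 0.1                      # assumed BCoL send time (~84 bytes at 10 Gb/s, Table 7)
T_SP = 1000.0 + 20.0 + 5.0        # tWD + tSHmax + tSCmax

t_offset = (H_MAX * (T_BR_MAX + T_CR_MAX)
            + ((TTL_BC - 1) * (TTL_BH - 1) - 1) * T_CR_MAX
            + T_BDP + T_CD + T_BCOL + T_SP)
print(t_offset)                   # ~1533.1 us, i.e. about 1.5 ms of reserved offset
```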

3.

Simulation and Discussion

The optical switching method proposed in this study was evaluated by constructing a network simulation environment consisting of six uniformly distributed GEO satellites, labeled S1 to S6, arranged in a ring. Each GEO satellite had two receiving and two transmitting intersatellite laser links with its two adjacent GEO satellites, one of which was the BCP transmission link and the other the BDP transmission link. At each satellite switching node, port 0 faced the clockwise direction and port 1 the counterclockwise direction. Each GEO satellite provides communication services for ground equipment through microwave links (ground port 2) and performs the photoelectric convergence and de-convergence of ground-equipment electrical-domain data and intersatellite optical-domain data, as well as GEO intersatellite BDP switching. The network simulation topology is shown in Fig. 1(b).

The orbital altitude of each GEO satellite was 35,786 km, and the distance between the GEO satellites was 42,166 km. Therefore, the transmission delay of the intersatellite links was set to 141 ms, and the intersatellite BDP link rates were 10, 40, and 100  Gb/s with one wavelength.16 Because a six-node ring network was formed, the maximum number of hops in the optical switching network was Hmax=3, TTLbc=2, and TTLbh=2. The scheduling window width tWD was set to 1 ms, and the pre- and post-protection interval was TG=5  μs. Here, TBDP was set to 100, 200, and 500  μs.

The satellite platform adopted a field-programmable gate array (FPGA) to implement the scheduling algorithm. The BCP link rate was set to 10  Gb/s. A Xilinx xc7vx690tffg1927-2 FPGA was selected to process the platform’s switching controls and evaluate the setting of the simulation parameters tBRmax, tCRmax, tCDmax, tSCmax, and tSHmax. The FPGA operating clock frequency was set to 156.25 MHz (the clock cycle was 6.4 ns). The interface processing section used two Xilinx 10G IP cores with 64-bit data width and 512-bit internal data width. Using the lengths of different types of BCPs and the algorithm processing times, tBRmax, tCRmax, tCDmax, tSCmax, and tSHmax were calculated (see Table 7)

Table 7

Evaluation of satellite payload simulation parameters according to Xilinx xc7vx690tffg1927-2 FPGA.

Parameter name | Frame length | Send and receive time | Processing time | Total time (ns) | Set value (μs)
tBRmax | BRP, 32 bytes | 6.4 ns × 8 = 51.2 ns | 6.4 ns × (20 + 20) = 256 ns | 307.2 | 0.5
tCRmax | BCoL, 84 bytes | 6.4 ns × 12 = 76.8 ns | 6.4 ns × (20 + 20) = 256 ns | 332.8 | 0.5
tCDmax | N/A | N/A | 6.4 ns × (5 × 100) = 3200 ns | 3200 | 5
tSCmax | N/A | N/A | 6.4 ns × 20 + 2300 ns = 2428 ns | 2428 | 5
tSHmax | N/A | N/A | 6.4 ns × 3 × 1000 = 19,200 ns | 19,200 | 20

tBRmax consists of the time to send and receive BRP data and the processing time. It takes eight clock cycles to send and receive a BRP frame and another 20 cycles each to process BRP registration and to update the frame information, so the estimated processing time of tBRmax in the FPGA was 307.2 ns, and 500 ns was selected as a typical value for simulation. tCRmax is similar to tBRmax, but the BCoL frame is 84 bytes, so four transceiver processing cycles were added; tCRmax had a predicted processing time of 332.8 ns within the FPGA, and 500 ns was selected as a typical value for the simulation. The conflict detection process was completed by table lookup and comparison in the FPGA. Since the maximum number of hops in the network was three and the maximum depth of the conflict table was set to 100 entries, the FPGA can meet the requirements of storing the conflict information. Five clock cycles are used for each table entry read and conflict comparison; therefore, the estimated processing time of tCDmax in the FPGA was 3200 ns, and 5 μs was selected as a typical value for simulation. tSCmax contains two parts, FPGA control signal generation and optical device configuration, where the control signal generation time is 20 cycles and the optical device configuration time is 2300 ns according to Ref. 8; therefore, tSCmax was 2428 ns, and 5 μs was selected as a typical value for simulation. Since the scheduling must iterate over the collision tree and judge all the collision data, with a processing period of 1000 clock cycles per iteration and a maximum of three hops in the network, the estimated processing time of tSHmax within the FPGA is 19.2 μs, and 20 μs was selected as a typical value for the simulation. The performance of the DC-SOBS, conventional OCS, and reservation optical burst switching (R-OBS) mechanisms was compared and analyzed through simulation in three aspects: link utilization, packet loss rate, and delay.
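The timing figures in Table 7 follow directly from the 6.4 ns clock period; a quick reproduction of the arithmetic, using only the cycle counts quoted in the text:

```python
CLK_NS = 6.4                                  # 156.25 MHz clock period

t_br_max = CLK_NS * 8 + CLK_NS * (20 + 20)    # BRP send/receive + register/update  -> 307.2 ns
t_cr_max = CLK_NS * 12 + CLK_NS * (20 + 20)   # BCoL (84 bytes) send/receive + processing -> 332.8 ns
t_cd_max = CLK_NS * 5 * 100                   # 5 cycles per entry, 100-entry conflict table -> 3200 ns
t_sc_max = CLK_NS * 20 + 2300                 # control signals + optical device configuration -> 2428 ns
t_sh_max = CLK_NS * 3 * 1000                  # 3 hops x 1000 cycles per scheduling iteration -> 19200 ns

print(t_br_max, t_cr_max, t_cd_max, t_sc_max, t_sh_max)
```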

Each satellite node sends BDPs to randomly selected one-hop, two-hop, and three-hop satellite nodes to compare the satellite optical network throughput and link utilization of the DC-SOBS method proposed in this article with those of R-OBS (Refs. 9 and 10) and OCS (Refs. 27 and 28). The BDPs generated by each switching node obey a Poisson distribution.29 Each intersatellite link load ranged from 10% to 100%, and the BDP lengths were 100, 200, and 500 μs. With R-OBS, a centralized control node is required to complete the management and allocation of network link resources, and the centralized control function was placed at node 1 in Fig. 1(b).
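A sketch of how the simulated traffic could be generated: each node emits BDPs whose arrival process is Poisson (exponential inter-arrival times) and whose destinations are drawn uniformly from the one-, two-, or three-hop neighbors, with the mean rate derived from the target load. All function and variable names are ours, not taken from the authors' simulator.

```python
import random

def generate_bdps(node, load, t_bdp_s, sim_time_s, hop_choices=(1, 2, 3)):
    """Yield (send_time, source_node, destination_hops) tuples for one source node.

    load     -- target link load in (0, 1]; mean BDP rate = load / TBDP (Poisson arrivals)
    t_bdp_s  -- fixed BDP length in seconds
    """
    rate = load / t_bdp_s                  # BDPs per second needed to reach the set load
    t = 0.0
    while t < sim_time_s:
        t += random.expovariate(rate)      # exponential inter-arrival -> Poisson process
        hops = random.choice(hop_choices)  # destination is a random 1-, 2-, or 3-hop neighbor
        yield (t, node, hops)
```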

The simulation results shown in Fig. 11 reveal that the satellite link utilization was extremely low under R-OBS because of the link transmission delay; the maximum link utilization was only 0.0038. With OCS, the link cannot be multiplexed for BDPs with various destinations. In the one-hop scenario, the link utilization of the circuit switching approach increased linearly with the network load and can reach the maximum link utilization of 1. However, in the two-hop and three-hop randomly generated BDP simulation environments, the link utilization decreases considerably as the hop count increases, reaching only 0.25 and 0.11 under 100% load. Compared with the two conventional switching approaches, the highest link utilization in the one-hop, two-hop, and three-hop scenarios was achieved by the proposed DC-SOBS approach. In the three-hop scenario, when the network load reached 100%, the maximum link utilization was 0.563, 0.415, and 0.37 for BDP lengths of 500, 200, and 100 μs, respectively. In the two-hop scenario, the link utilization was 0.659, 0.553, and 0.512 for BDP lengths of 500, 200, and 100 μs, respectively. In the one-hop scenario, the link utilization was the same as that of OCS.

Fig. 11

Link utilization results in six-node satellite optical network for DC-SOBS, OCS, and R-OBS with random data destination addresses and BDP lengths of (a) 500; (b) 200; and (c) 100  μs.


The results for the BDP loss ratio are shown in Fig. 12. With the R-OBS method, the entire link path for a BDP is reserved before sending, so the BDP loss ratio is 0. With the OCS method, the packet loss rate increased with the number of switching hops because the random BDP destination generation resulted in more BDPs falling outside the OCS-established links; the maximum packet loss rate of 88.7% was reached in the three-hop simulation conditions. With the DC-SOBS method, the trend of the BDP loss ratio was similar to that of its link utilization, and the DC-SOBS mechanism exhibited a considerably lower BDP loss rate than OCS. Under the same network load condition, as the BDP length decreased, the BDP arrival rate increased, and thus the conflict probability increased. The maximum packet loss rate was 63% when the network load reached 100%, the BDP length was 100 μs, and there were three hops.

Fig. 12

Loss ratio results in a six-node satellite optical network for DC-SOBS, OCS, and R-OBS with random data destination addresses and BDP lengths of (a) 500; (b) 200; and (c) 100  μs.


Because only one end-to-end connection can be established at a time on one optical link with the OCS mechanism, link multiplexing is not possible, which leads to an infinite transmission delay for part of the BDPs in the randomly generated destination address simulation scenarios. Therefore, the transmission delay simulation compares only the R-OBS and DC-SOBS mechanisms. The results of the time-delay simulations are shown in Fig. 13. Using the R-OBS method, the average transmission delay increased as the network load and hop count increased and as the BDP length decreased. The R-OBS method reached a minimum delay of 14.1 s at the minimum network load of 10% and a maximum delay of 2878.9 s when the network load reached 100%. With the DC-SOBS method, the latency was primarily affected by the number of retransmissions of the BDPs. The maximum average transmission delay was 0.55 s with 100% network load, three hops, and a BDP length of 500 μs; this is a significant improvement on the 1.8 s of Ref. 27. The minimum average transmission delay was 0.14 s with 10% network load, one hop, and a BDP length of 100 μs. Therefore, the simulation comparison reveals that the latency performance can be improved by three orders of magnitude.

Fig. 13

Transmission delay results in the six-node satellite optical network for DC-SOBS and R-OBS with random data destination addresses and BDP lengths of (a) 500; (b) 200; and (c) 100  μs.


The BDP length of 500 μs was selected for the link throughput and buffer size simulation analysis. The single-link throughput and the buffer sizes used were compared for one, two, and three hops for random services with 10, 40, and 100 Gb/s link rates. Since data are only buffered while the source node forms BDPs, the buffer sizes reported are the average values required per optical link to reach the set transmit load ratio. As seen in Fig. 14(a), for the same link rate, the link throughput was mainly affected by the number of hops. At a 100 Gb/s link rate, the maximum throughput of 98 Gb/s (accounting for the 10 μs protection interval) can be achieved at one hop without conflict, and as the number of hops increases, the link throughput decreases to 55 Gb/s at three hops. The reason is that as the number of hops increases, the probability of BDP conflicts increases, reducing the link throughput. As seen from the results in Fig. 14(b), the average buffer size required per link increased with the link rate and the number of hops, reaching a maximum of 5600 MB at 100 Gb/s under three-hop conditions. Since the average buffer size is proportional to the data transmission delay, the increase in hop count increases both the transmission delay and the BDP conflict probability, which increases the average end-to-end BDP delay; therefore, the average buffer size increases.
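The proportionality claimed above can be checked roughly: the average buffer per link is about the link rate times the average end-to-end (queueing plus retransmission) delay. The back-of-the-envelope check below assumes buffer ≈ rate × delay, which is our reading of the text rather than an equation from the paper; with the figures reported for 100 Gb/s and three hops, the implied average delay is on the order of 0.45 s, consistent with the delay results of Fig. 13.

```python
# Rough consistency check: average buffer ~ link rate x average BDP delay (assumed relation).
rate_bps = 100e9                      # 100 Gb/s link
buffer_bytes = 5600e6                 # ~5600 MB reported for three hops at 100 Gb/s
implied_delay_s = buffer_bytes * 8 / rate_bps
print(implied_delay_s)                # ~0.45 s, in line with the delay results of Fig. 13
```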

Fig. 14

(a) Single-link throughput and (b) average buffer size results in the six-node satellite optical network for DC-SOBS with a BDP length of 500 μs. Link rates of 10, 40, and 100 Gb/s are compared for one, two, and three hops.


4.

Conclusion

In this study, a novel DC-SOBS mechanism was proposed for satellite laser links with ultra-high delays. This mechanism supports both end-to-end reservation and hop-to-hop competitive switching types, depending on the requirements of various qualities of service. The processing time of the optical switching nodes is preset in the offset time between each BCP and BDP pair. Each optical switching node independently schedules the BDPs according to the received information to realize a distributed switching schedule. To solve the problem of BDP collisions in distributed scheduling, BDP collision detection, BDP collision results distribution, and BDP switching scheduling were introduced. The simulation results revealed that the DC-SOBS mechanism achieves considerably higher link utilization than R-OBS under the ultra-high delays of satellite links; the throughput and buffer sizes were also evaluated for link rates of 10, 40, and 100 Gb/s to confirm that satellite link rate requirements can be met. Furthermore, compared with conventional OCS, the proposed method can realize link multiplexing and considerably improve the flexibility of the satellite optical network. The simulation results also show that the performance of the proposed method is mainly affected by the data conflict probability. The conflict probability increases with the number of satellite nodes, which degrades performance; the mechanism is therefore not suitable for networks with a large number of nodes, such as Low Earth Orbit (LEO) satellite networks. Future work will focus on the performance evaluation of a hardware implementation of the algorithm and will consider using DWDM and dividing the conflict domain to reduce the conflict probability in large networks.

Acknowledgments

This work was partly supported by National Natural Science Foundation of China (NSFC) (Grant No. 62102314), Natural Science Basic Research Program of Shaanxi Province (Grant No. 2021JQ-708), Scientific Research Program Funded by Shaanxi Provincial Education Department (Grant No. 20JK0923), and Scientific Research Program of Yulin City (Grant No. YF-2020-183).

References

1. 

Y. Wang, T. Chen and N. Zhou, “Space-based optical burst switching assembly algorithm based on QoS adaption,” in IEEE 11th Int. Conf. Commun. Software and Networks, 101 –105 (2019). https://doi.org/10.1109/ICCSN.2019.8905264 Google Scholar

2. 

H. Kaushal and G. Kaddoum, “Optical communication in space: challenges and mitigation techniques,” IEEE Commun. Surv. Tutor., 19 (1), 57 –96 (2016). https://doi.org/10.1109/COMST.2016.2603518 Google Scholar

3. 

H. Zhai et al., “Design and implementation of the hardware platform of satellite optical switching node,” in 19th Int. Conf. Opt. Commun. and Networks, 01 –03 (2021). Google Scholar

4. 

A. S. Karasuwa, I. Otung and J. Rodriguez, “Joint precoding and spreading in the forward downlink of multi spot-beam satellite communication system,” in 35th AIAA Int. Commun. Satell. Syst. Conf., 5403 (2017). Google Scholar

5. 

C. Wang et al., “OSBN: architecture and control mechanism of optical switched satellite backbone network,” Photonic Network Commun., (2022). Google Scholar

6. 

X. Shen, Q.-G. Zong and X. Zhang, “Introduction to special section on the China seismo-electromagnetic satellite and initial results,” Earth Planet. Phys., 2 (6), 439 –443 (2018). https://doi.org/10.26464/epp2018041 Google Scholar

7. 

Y. Ji et al., “All optical switching networks with energy-efficient technologies from components level to network level,” IEEE J. Sel. Areas Commun., 32 (8), 1600 –1614 (2014). https://doi.org/10.1109/JSAC.2014.2335352 ISACEM 0733-8716 Google Scholar

8. 

R. Zhang et al., “Ultracompact and low-power-consumption silicon thermo-optic switch for high-speed data,” Nanophotonics, 10 (2), 937 –945 (2021). https://doi.org/10.1515/nanoph-2020-0496 Google Scholar

9. 

M. Imran et al., “Software-defined optical burst switching for HPC and cloud computing data centers,” J. Opt. Commun. Networking, 8 (8), 610 –620 (2016). https://doi.org/10.1364/JOCN.8.000610 Google Scholar

10. 

M. Imran et al., “Performance analysis of optical burst switching with fast optical switches for data center networks,” in 17th Int. Conf. Transparent Opt. Networks, 1 –4 (2015). Google Scholar

11. 

W. Miao et al., “SDN-enabled OPS with QoS guarantee for reconfigurable virtual data center networks,” J. Opt. Commun. Networking, 7 (7), 634 –643 (2015). https://doi.org/10.1364/JOCN.7.000634 Google Scholar

12. 

S. Peng et al., “Multi-tenant software-defined hybrid optical switched data centre,” J. Lightwave Technol., 33 (15), 3224 –3233 (2015). https://doi.org/10.1109/JLT.2015.2438398 JLTEDG 0733-8724 Google Scholar

13. 

Y. Liu, K. C. Chua and G. Mohan, “Achieving high performance burst transmission for bursty traffic using optical burst chain switching in wdm networks,” IEEE Trans. Commun., 58 (7), 2127 –2136 (2010). https://doi.org/10.1109/TCOMM.2010.07.090047 IECMBT 0090-6778 Google Scholar

14. 

F. Yan et al., “Opsquare: a flat DCN architecture based on flow-controlled optical packet switches,” J. Opt. Commun. Networking, 9 (4), 291 –303 (2017). https://doi.org/10.1364/JOCN.9.000291 Google Scholar

15. 

H. Tode et al., “Packet offloading exploiting life-sustained optical path resources in ops/ocs integrated network,” in Photonics in Switching and Comput. (PSC), 1 –3 (2018). Google Scholar

16. 

Z. Zheng et al., “Time-sliced flexible resource allocation for optical low earth orbit satellite networks,” IEEE Access, 7 56753 –56759 (2019). https://doi.org/10.1109/ACCESS.2019.2913441 Google Scholar

17. 

S. Baoxia et al., “Hybrid circuit and packet satellite switching technology based on circuit switch,” in IEEE Int. Conf. Comput. and Inf. Technol., 642 –645 (2014). https://doi.org/10.1109/CIT.2014.161 Google Scholar

18. 

N. Hua and X. Zheng, “Optical time slice switching (OTSS): an all-optical sub-wavelength solution based on time synchronization,” in Asia Commun. and Photonics Conf., (2013). Google Scholar

19. 

Q. Zhang et al., “Constructing satellite backbone network via timeslot-based optical switching,” in Asia Commun. and Photonics Conf., Su2A–251 (2018). Google Scholar

20. 

T. Li et al., “Optical burst switching based satellite backbone network,” Proc. SPIE, 10697 106975Q (2018). https://doi.org/10.1117/12.2311465 Google Scholar

21. 

C. W. F. Parsonson et al., “Optimal control of SOAs with artificial intelligence for sub-nanosecond optical switching,” J. Lightwave Technol., 38 (20), 5563 –5573 (2020). https://doi.org/10.1109/JLT.2020.3004645 JLTEDG 0733-8724 Google Scholar

22. 

L. Paillier et al., “Space-ground coherent optical links: ground receiver performance with adaptive optics and digital phase-locked loop,” J. Lightwave Technol., 38 (20), 5716 –5727 (2020). https://doi.org/10.1109/JLT.2020.3003561 JLTEDG 0733-8724 Google Scholar

23. 

K. A. Kumar et al., “Scheduling approach for optical burst switching nodes based on hybrid multipriority algorithm,” Opt. Eng., 61 (3), 036112 (2022). https://doi.org/10.1117/1.OE.61.3.036112 Google Scholar

24. 

S. J. B. Yoo, “Optical packet and burst switching technologies for the future photonic internet,” J. Lightwave Technol., 24 (12), 4468 –4492 (2006). https://doi.org/10.1109/JLT.2006.886060 JLTEDG 0733-8724 Google Scholar

25. 

L. Li, L. Qiao and Q. Chen, “Design and implementation of multi-priority hybrid threshold scheduling algorithm for edge nodes of satellite OBS networks,” in Int. Conf. Commun., Inf. Syst. and Comput. Eng., 195 –198 (2019). Google Scholar

26. 

W. Zhang et al., “Hybrid GEO and IGSO satellite constellation design with ground supporting constraint for space information networks,” in IEEE 18th Int. Conf. Commun. Technol., 755 –761 (2018). https://doi.org/10.1109/ICCT.2018.8600118 Google Scholar

27. 

H. Nagai et al., “Design and verification of large-scale optical circuit switch using ULCF AWGs for datacenter application,” J. Opt. Commun. Networking, 10 (7), 82 –89 (2018). https://doi.org/10.1364/JOCN.10.000B82 Google Scholar

28. 

A. Pagès et al., “Performance evaluation of an all-optical OCS/OPS-based network for intra-data center connectivity services,” in 16th Int. Conf. Transparent Opt. Networks, 1 –4 (2014). Google Scholar

29. 

N. Barakat and E. H. Sargent, “Analytical modeling of offset-induced priority in multiclass OBS networks,” IEEE Trans. Commun., 53 (8), 1343 –1352 (2005). https://doi.org/10.1109/TCOMM.2005.852845 IECMBT 0090-6778 Google Scholar

Biography

Zhe Zhao received a BSc degree in communication engineering from Guilin University of Electronic Technology, Guilin, China, in 2010, and an MSc degree in information and telecommunication engineering from Xidian University, Xi’an, China, in 2013. He is currently pursuing a PhD in information and telecommunication engineering in Xidian University. His research interests include satellite optical switching and deterministic networks.

Zhiliang Qiu received a BSc degree in communication engineering and an MSc and PhD degrees in communication and information systems from Xidian University, Xi’an, China, in 1986, 1989, and 1999, respectively. He is currently a professor at the State Key Laboratory of Integrated Services Networks (ISN), Xidian University. His research interests include broadband networks and switching technology.

Weitao Pan received a BSc degree from the School of Technical Physics at Xidian University in 2004 and a PhD from the School of Microelectronics of Xidian University in 2010. He is currently an associate professor in the State Key Laboratory of Integrated Service Networks, Xidian University. His current research interests include VLSI design methods and post-silicon verification.

Hanwen Sun received a BSc degree in communication engineering and an MSc degree in information and telecommunication engineering from Xidian University, Xi’an, China, in 2010 and 2013, respectively. He is currently working at the China Academy of Space Technology (Xi’an), China. His research interests include satellite network switching and software-defined networking.

Ling Zheng received an MSc degree in computer science and technology in 2014 and a PhD in information and communication engineering in 2019, both from Xidian University, Xi’an, China. He is currently a lecturer with the School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications. His research interests include high-performance switching and routing, software-defined networking, and deterministic networks.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Zhe Zhao, Zhiliang Qiu, Weitao Pan, Hanwen Sun, and Ling Zheng "Distributed competitive satellite optical burst switching mechanism for satellite networks with ultra-high link delays," Optical Engineering 61(6), 066106 (16 June 2022). https://doi.org/10.1117/1.OE.61.6.066106
Received: 26 January 2022; Accepted: 31 May 2022; Published: 16 June 2022
KEYWORDS: Switching; Satellites; Data transmission; Optical networks; Optical switching; Optical engineering; Relays
