
  1. (Department of Software and Communication Engineering, Hongik University, Korea danish1852@gmail.com, jsnbs@hongik.ac.kr )



Keywords: WMMFS, Efficient offloading, VFC, Task placement

1. Introduction

Over the last few years, the Internet of Things (IoT) has expanded rapidly into areas such as vehicles, hospitals, and household commodities. Owing to this rapid expansion in IoT devices, it is estimated that mobile traffic will reach 79 zettabytes of data in 2025, and approximately 41 billion IoT devices will be connected, as stated by the International Data Corporation (IDC) (Martinez, 2021). This boom in the volume of data from different IoT devices has created a demand for data processing closer to the IoT devices to distribute the workload and minimize the response delay. Recent research has shown that cloud computing (CC) faces limitations when dealing with big data because IoT needs a robust architecture that allows rapid data processing and counters latency issues. Moreover, the centralized architecture of CC suffers from traffic congestion and storage problems. Several studies have examined these issues and brought computation and storage facilities near the IoT devices, an approach known as fog computing. Researchers (Hong, 2017), (Awaisi, 2019) have therefore suggested that fog computing is superior to CC in dealing with latency and energy issues. Fog computing has improved latency, traffic congestion, and response time.

Fog computing has been integrated with vehicular networks (VNs), resulting in vehicular fog networks (VFNs) with improved traffic safety, localization, latency, and resource allocation. It facilitates computing and communication and brings networking close to vehicles. It is more flexible and more efficient than vehicular cloud computing (VCC), in which the servers are far away from the vehicles. Table 1 compares the VFC and VCC parameters. Fig. 1 shows the communication between vehicles and the roadside unit (RSU) in VFC. The modes of communication are vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I), and infrastructure-to-infrastructure communication (I2I). These vehicles are both resource providers and consumers that can compute and store data (Huang, 2017). The major issue in VFC is efficient task offloading to fog nodes, which can reduce the communication delay and improve real-time feedback.

Fig. 1. Vehicular Communication Architecture.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig1.png
Table 1. Comparison of VFC and VCC.
../../Resources/ieie/IEIESPC.2024.13.1.33/tb1.png

The VFC architecture comprises three layers: the fog, cloudlet, and cloud layers (Thakur, 2019). The fog layer includes vehicles and other connected devices within the coverage area of the RSU. This layer is the predominant layer in VFC because it has sensing, communication, computing, and storage abilities. Sometimes, vehicle-generated data can be processed by the vehicle itself; if the data processing and decision-making are more complex, the vehicle sends the data to the fog node. Both parked and moving vehicles near the RSU serve as fog nodes for VFC. The different information sensed by the vehicles is uploaded to the fog node.

The vehicles exchange messages either with each other or with fog nodes to support many important applications. These applications are partitioned into five main groups (Jakubiak, 2008), (Hartenstein, 2008):

1. Safety applications: These applications improve road safety and reduce road accidents. Examples include the exchange of warning messages, emergency messages, weather conditions, and road conditions.

2. Traffic management applications: VFC improves traffic efficiency by avoiding traffic congestion. Fog nodes provide an efficient route to vehicles, which reduces the travel time and other resources of vehicles.

3. Infotainment and other applications: VFC provides entertainment and comfort to passengers and drivers. Online video streaming, gaming, live chatting and downloading audio and video data are examples of infotainment applications.

4. Smart grid: Fog computing also supports smart grid applications, such as smart meters that perform load balancing. Given different energy sources, such as wind, solar, and thermal, the smart grid switches its meter to the cheapest energy resource.

5. Fog and IoT: IoT is the network that connects physical devices through the internet. Fog computing helps collect data from IoT devices, computes it after analysis, and provides real-time feedback to the concerned device.

Contributions:

1. The paper proposes offloading vehicular tasks to the fog node, which depends on the priority of tasks and the vehicle weight.

2. The vehicle weight is calculated dynamically, based on the priority of the tasks generated by that vehicle.

3. Vehicles with more tasks of higher priority are given higher weights and will have more opportunities to send their tasks to the fog node.

The manuscript is organized as follows. Section 2 presents the related work. The proposed scheme is outlined in Section 3. Section 4 presents performance evaluation results for the proposed scheme. The conclusion of the paper and its future direction are discussed in Section 5.

2. Related Work

Several studies have examined fog computing in vehicular networks. Recent work addresses energy efficiency in multicast routing, which is used in various applications, such as emergency helplines, police, and firefighting (Ahmed Jawad, 2019). Multicasting is used in many vehicular networks, but a special energy-efficient multicast routing protocol based on software-defined networking (SDN) and fog computing for vehicular networks (EEMSFV) is used for multicasting. EEMSFV consists of four layers: the vehicle layer, the fog computing layer, OpenFlow switches, and the SDN layer. It uses priority-based scheduling and classification algorithms to manage multicast requests based on deadline and priority.

The integration of fog computing and vehicular networks, called VFC, has brought cloud computing to edge networks. This integration improves network performance in terms of location awareness, latency, and connectivity. In addition, VFC is used to manage traffic in smart cities (Zhaolong Ning, 2019). It uses a three-layer architecture: the fog, cloudlet, and cloud layers, and applies optimization methods in its offloading schemes, using both moving and parked vehicles as resource providers. It reduces the response delay because the cloudlet and fog nodes are very close to the terminal.

Cloud computing does not guarantee timely computation and service access for delay-sensitive applications because of traffic congestion and long propagation delays. Therefore, multiple time-constrained vehicular application scheduling (MTVS) has been proposed (Chuan Lin, 2020). MTVS introduces a fog base station (FBS) and SDN architecture and, instead of a centralized base station, divides the network into three layers, distributing mobile delay-sensitive tasks over multiple FBSs. Data transmission is optimized using linear programming techniques on the transmission model. Local and fog scheduling are performed using a hybrid scheduling algorithm. As a result, the proposed method achieves a reasonable success rate.

Several recent studies have used artificial intelligence (AI) for fog computation in vehicular networks (Chenyue Zhang, 2021), (Mugen Peng, 2020), (Abir Mchergui, Vehicular Communications). AI has recently been used in several application domains because of its potential to enhance traditional data-driven methods. It has enhanced high-resolution videos and images, resolved different industrial issues, and supported augmented reality (AR), self-driving cars, and massive sensor deployments that carry their data to satellites. A reliable and interference-free mobility management algorithm (RIMMA) has been proposed (Networking, 2020) for fog computing in vehicular networks to support such graphics-intensive services. The algorithm improves communication among vehicles and base units, computation power, cooperation, and storage. It is self-adaptive, intelligent, highly efficient for fast-moving vehicles, and delay tolerant.

Task offloading and the handover of data between base stations are prominent issues. A previous study (Salman Memon, 2018) used machine-learning techniques, such as recurrent neural networks (RNNs), which can learn the latency and cost of a path. The RNN uses a three-layer model to predict the correct fog node for sending data. This algorithm reduces interruption in handovers between fog nodes and vehicles and maintains a smooth transition of the vehicles' connections.

3. Proposed Scheme

This study considered a VFN in which vehicles constantly offload their tasks to fog nodes for computation, as shown in Fig. 2. The vehicular tasks are divided into three types according to their priorities: high-, medium-, and low-priority tasks, as shown in Table 2. Based on the task priority, the fog node provides a fair opportunity for every vehicle to place its tasks on the fog node; therefore, every vehicle can offload its tasks. High-priority tasks are placed first, medium-priority tasks next, and low-priority tasks last, and this process is repeated accordingly (Feng, 2017). High-priority application tasks are the most important for the lives of the driver and passengers; these tasks are executed without delay. Medium-priority application tasks are optional driving-related applications, such as routing and driver information services; delay or failure in these applications causes inconvenience to drivers and passengers. Low-priority tasks are not important for drivers and passengers; they include entertainment and video games. The placement of tasks on fog nodes is based on the priority of the tasks. Fog nodes adopt the weighted max-min fair sharing (WMMFS) algorithm to provide a fair opportunity for every vehicle to offload its tasks. The novelty of the proposed work is to evaluate the weights of the vehicles according to the priorities of their tasks and to use the WMMFS algorithm at the fog node for task offloading. Vehicles generating high-priority tasks receive more resources at the fog node, while vehicles with low-priority tasks receive fewer resources.

Fig. 2. Proposed work Architecture.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig2.png
Table 2. Task priorities.
../../Resources/ieie/IEIESPC.2024.13.1.33/tb2.png

3.1 WMMFS Algorithm

WMMFS is a generic fair-queuing technique that determines how a resource is shared among different users. Let the resource be a server with capacity C, and let the users be sources 1, 2, ..., n with weights W1, W2, ..., Wn and demands r1, r2, ..., rn. The algorithm maximizes the minimum share of the sources according to their weights.

In this algorithm, the server's resources are allocated in order of increasing source demand. A source cannot receive a resource share larger than its demand, and every source should be given a resource share according to its assigned weight. The weights are normalized using Eq. (1).

(1)
$ W_{norm}=\frac{(W-W_{\min })}{W_{\max }-W_{\min }} $

where $W_{norm}$ is the normalized weight of the source; $W$ is the weight calculated from the task priorities; $W_{\max}$ is the maximum weight among all sources; and $W_{\min}$ is the minimum weight among all sources.
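The normalization in Eq. (1) can be sketched in a few lines of Python (a minimal illustration; the function name and the handling of equal weights are our own assumptions, not from the paper):

```python
def normalize_weight(w, w_min, w_max):
    """Min-max normalization of a vehicle weight, per Eq. (1)."""
    if w_max == w_min:        # all sources equally weighted (assumption)
        return 1.0
    return (w - w_min) / (w_max - w_min)

# Weights between 3 and 9 are mapped onto [0, 1]; e.g. 6 lies halfway:
half = normalize_weight(6, w_min=3, w_max=9)   # 0.5
```

Note that this mapping sends the minimum-weight source to exactly 0, so under Eq. (4) the lightest source receives no share in that round until the process repeats.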

In Eq. (2), $W_{t(norm)}$ is the sum of the normalized weights of all vehicles, computed by the fog server.

(2)
$ W_{t(norm)}=W_{1}+W_{2}+W_{3}+\ldots +W_{n} $

In Eq. (3), D represents the total demand of the data that the users want to offload, which is calculated by the RSU after a fixed number of rounds.

(3)
$ D=r_{1}+r_{2}+r_{3}+\ldots +r_{n} $

In Eq. (4), $S_i$ denotes the resource share allocated by the fog node to node i. Table 3 lists the notations for the different symbols.

(4)
$ S_{i}=\frac{W_{norm}}{W_{t(norm)}}\times C $
In the first round, some nodes are allocated more than their needs, while others are in deficit. The nodes with excess allocation share their surplus among those with a deficit. This process is repeated until all nodes have sent their data to the server.
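The redistribution process described above can be sketched in Python (a minimal illustration of weighted max-min fair sharing; the function name and tolerance constants are our own, not from the paper):

```python
def wmmfs_allocate(demands, weights, capacity):
    """Weighted max-min fair sharing: split `capacity` among sources with
    the given demands and weights.  No source receives more than its
    demand; the surplus of satisfied sources is redistributed among the
    remaining sources in proportion to their weights."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))        # sources not yet satisfied
    remaining = float(capacity)
    while active and remaining > 1e-9:
        w_total = sum(weights[i] for i in active)
        if w_total == 0:
            break
        round_budget = remaining
        satisfied = set()
        for i in sorted(active):
            share = round_budget * weights[i] / w_total
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-9:
                satisfied.add(i)
        if not satisfied:                    # budget fully consumed
            break
        active -= satisfied
    return alloc
```

For example, with demands of 50, 200, and 300 MB, weights 1, 1, and 2, and a 400 MB capacity, the first source is capped at its 50 MB demand, and its 50 MB surplus is re-split 1:2 between the other two sources.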

Table 3. Notations used in this paper.
../../Resources/ieie/IEIESPC.2024.13.1.33/tb3.png

Data offloading is easily understood from the queuing model shown in Fig. 3. Each source sending data to the fog server is assigned a queue. The size of the queue depends on the weight of the source: a source with a higher weight is assigned a larger queue, and vice versa. Each queue sends packets from its source to the server. Let $A_{i,j}$ be the jth packet entering the ith queue. Every packet in these queues is assigned a packet finishing number, which is the service round in which packet $A_{i,j}$ would finish its service. The finishing time of a packet depends on the packet size and the round number in which the new packet's service starts.

Fig. 3. Queuing model of the fog node.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig3.png

Let $L_{i,j}$ represent the size of packet $A_{i,j}$, and let $F_{i,j}$ be its finishing number. Suppose a packet arrives at the server at time $t_a$, by which the queue has been cycled through $R(t_a)$ rounds, as shown in Eq. (5). The packet must wait for service if any packet is ahead of it in its own queue; packets in other queues do not affect its service. The finishing number of the packet is calculated using Eq. (5). Once a finish number is assigned to a packet, it remains constant.

(5)
$ F_{i,j}=\max \{F_{i,j-1},R(t_{a})\}+L_{i,j} $
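The finish-number recurrence can be traced with a short Python sketch (a hypothetical helper, taking $F_{i,0}=0$; packet sizes play the role of service rounds as in the model above):

```python
def finish_numbers(packet_sizes, arrival_rounds):
    """Finish number of each packet in a single queue, per Eq. (5):
    F[j] = max(F[j-1], R(t_a)) + L[j], with F[0] taken as 0."""
    finish = []
    prev = 0
    for size, arrival in zip(packet_sizes, arrival_rounds):
        prev = max(prev, arrival) + size
        finish.append(prev)
    return finish

# Three packets of sizes 3, 2, 4; the third arrives only after round 10,
# so its service starts at round 10 rather than at the previous finish 5.
rounds = finish_numbers([3, 2, 4], [0, 0, 10])   # [3, 5, 14]
```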

3.2 Calculating the Weights of the Vehicles based on the Task Priority

This section explains how the weights of the vehicles are calculated from the priorities of their tasks to ensure fair placement of tasks on the fog node. The proposed work provides a fair opportunity for every vehicle to place tasks on the fog node based on the task priority and vehicle weight. Each task is given a priority of high, medium, or low, and the fog node allocates resources for task placement based on the resulting weight. A source whose tasks have high weight is given more priority for resource allocation at the fog node, while a source with lower weight receives fewer resources for task placement.

Suppose vehicles V = {V1, V2, V3, ..., Vn} generate tasks and offload them to the fog node. The vehicle weight is calculated in different rounds from the priorities of the tasks. Let every vehicle generate ten tasks of random priority in the first cycle. In each cycle, the task priorities are summed, and the weight of an individual vehicle is the sum of its task priorities. Vehicles with higher weight place more tasks on the fog node, while vehicles with lower priority weights acquire fewer resources for task placement and further computation of tasks.

Eq. (6) represents, in matrix form, the vehicles that want to offload their tasks to the fog node.

(6)
$V=\begin{pmatrix} v_{11} & \ldots & v_{1n}\\ \vdots & \ddots & \vdots\\ v_{m1} & \cdots & v_{mn} \end{pmatrix}$

Each vehicle generates its tasks and offloads them to the fog node. The tasks generated by the vehicles have different priorities: high, medium, and low. Each vehicle is assigned its weight based on the priorities of the tasks it generates. Let vehicle V1 generate tasks {T11, T12, T13} with priorities high, medium, and low, respectively. Table 2 lists the values assigned to the priorities.

In Eq. (7), the weight is assigned to each vehicle according to the priorities of its generated tasks.

(7)
$ T_{1p}=T_{11}+T_{12}+T_{13}+\ldots +T_{1n} $

where $T_{i,p}$ is the total task priority of an individual vehicle; i is the vehicle index; and n is the number of tasks.

Eq. (8) expresses the summation of the task priorities of vehicle i; this summation is the actual weight of the vehicle.

(8)
$ T_{i,p}={\sum }_{j=1}^{n}T_{i,j} $
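The weight computation in Eqs. (7)-(8) amounts to summing the priority values of Table 2; a minimal Python sketch (the helper name is our own):

```python
# Priority values from Table 2.
PRIORITY = {"high": 3, "medium": 2, "low": 1}

def vehicle_weight(task_priorities):
    """Vehicle weight = sum of its task-priority values, per Eqs. (7)-(8)."""
    return sum(PRIORITY[p] for p in task_priorities)

# Vehicle V1 from the text, with one high-, one medium-, and one
# low-priority task: 3 + 2 + 1 = 6
w1 = vehicle_weight(["high", "medium", "low"])
```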

In addition, the proposed work assumes an intelligent vehicular fog server (Quang Duy La, 2019) in which the vehicles and fog nodes know one another's parameters, such as the capacity C of the fog node and the demand D of the data that the vehicles intend to offload. The steps for offloading the data and placing it on the fog node are as follows.

1. The vehicles find their weights according to Eq. (8). After the weights are assigned to the individual vehicles, data offloading from the vehicles to the fog node starts. Offloading depends on the capacity of the fog node, the weights assigned to the vehicles, and the demand of the data that the vehicles intend to offload to the fog node.

2. Fair offloading of tasks among vehicles is maintained using the WMMFS algorithm, as discussed in Section 3.1.

3. The fog node assigns a queue to every vehicle that intends to send data to the fog node. The assigned queue size depends on the vehicle weight, as in Eq. (9).

(9)
$ F_{i,j}=\max \left\{F_{i,j-1},R\left(t_{a}\right)\right\}+\frac{L_{i,j}}{w_{i}} $

where $F_{i,j}$ is the finishing number, $L_{i,j}$ is the packet size, and $w_{i}$ is the weight of the individual vehicle, as discussed in Section 3.1.

4. The fog node checks the demand of the data that the vehicles intend to send. If the demand exceeds the capacity of the fog node, the fog node applies the WMMFS algorithm; otherwise, the vehicles can send their data to the fog node immediately.

5. After the demand is compared with the capacity of the fog node, each vehicle calculates its weight from its task priorities as defined in Eq. (8).

6. The weights of the vehicles are normalized using Eq. (1).

7. The RSU sums the weights of the vehicles within its coverage area according to Eq. (2).

8. According to the weights of the vehicles, the fog node gives every vehicle its share according to Eq. (4).

9. This process is repeated several times until all nodes send their data.
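The steps above can be sketched as a single round at the fog node (a Python illustration under the stated assumptions; the function and variable names are our own, and step 9's repetition over the surplus is omitted for brevity):

```python
# Priority values from Table 2.
PRIORITY = {"high": 3, "medium": 2, "low": 1}

def offload_round(vehicles, capacity):
    """One offloading round at the fog node.  `vehicles` maps a vehicle
    id to (list_of_task_priorities, demand_in_MB)."""
    # Steps 1 and 5: vehicle weight = sum of task priorities (Eq. (8))
    weights = {v: sum(PRIORITY[p] for p in tasks)
               for v, (tasks, _) in vehicles.items()}
    demands = {v: d for v, (_, d) in vehicles.items()}
    # Step 4: if the total demand (Eq. (3)) fits, everyone sends at once
    if sum(demands.values()) <= capacity:
        return dict(demands)
    # Step 6: normalize the weights (Eq. (1))
    w_min, w_max = min(weights.values()), max(weights.values())
    if w_max == w_min:
        norm = {v: 1.0 for v in weights}          # all vehicles equal
    else:
        norm = {v: (w - w_min) / (w_max - w_min)
                for v, w in weights.items()}
    # Steps 7-8: each vehicle's share of the capacity (Eqs. (2) and (4)),
    # capped at its own demand; step 9 would repeat this for the surplus.
    w_total = sum(norm.values())
    return {v: min(demands[v], norm[v] / w_total * capacity)
            for v in vehicles}
```

With three vehicles of weights 9, 6, and 3 each demanding 300 MB from a 400 MB fog node, the normalized weights become 1, 0.5, and 0, so the shares are roughly 267, 133, and 0 MB; the lightest vehicle must wait for a later round.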

Fig. 4. Flow chart of the proposed work.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig4.png
Fig. 5. Flow chart for task placement.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig5.png
Table 4. Simulation parameters and their values.

Parameter                  Value
Number of vehicles         10-30
Task sizes                 50-250 MB
Deviation in task sizes    Uniform [0, 50]
Fog node capacity          400 MB per cycle
Task priorities            High = 3, Medium = 2, Low = 1

4. Performance Evaluation

A fog network simulation setup was developed in MATLAB, and the proposed algorithm was implemented for task offloading scenarios. The offloaded tasks were divided into high, medium, and low priority. Table 4 lists the values of the critical parameters. The simulation setup consisted of one fog node and 10 to 30 vehicles across a 100 m ${\times}$ 100 m area. The offloaded task sizes ranged from 50 to 250 MB, in 50 MB increments between simulation iterations. The fog node had a capacity of 400 MB per cycle.

Each vehicle had an uplink to the fog node with a bandwidth of 10 MB. This study considered the PCS-1900 GSM band, with the free-space path loss between vehicle m and fog node n given by $PL_{m,n} = 38.02 + 20 \log d_{m,n}$ (Swain, 2021). Each task was assigned a number according to its priority. The performance of the proposed method was evaluated in terms of task-offloading efficiency: the successful placement of different tasks on the fog node when several vehicles offload their tasks simultaneously. The evaluation examined how the fog node provides a fair opportunity to every offloading vehicle when adopting the WMMFS algorithm, compared with offloading tasks from the vehicles to the fog node using a random algorithm. Different simulations were conducted between the vehicles and the fog node using the random selection algorithm and the WMMFS algorithm.
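The path-loss model used in the simulation can be evaluated directly (a sketch assuming a base-10 logarithm and distance in meters, which is conventional for dB path-loss models but not stated explicitly in the text):

```python
import math

def path_loss_db(distance_m):
    """Free-space path loss (Swain, 2021): PL = 38.02 + 20*log10(d).
    Assumes d is in meters and the logarithm is base 10."""
    return 38.02 + 20 * math.log10(distance_m)

# At 100 m: 38.02 + 20 * 2 = 78.02 dB
loss = path_loss_db(100)
```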

Fig. 6 presents a simulation scenario in which ten vehicles send their application tasks to the fog node through the random selection algorithm and the proposed algorithm. The graphs show that with the random selection algorithm, the first five vehicles send their data to the fog node and successfully place their tasks on it. The remaining five vehicles cannot send their data because all the available resources are occupied by the first five vehicles, leaving nothing to accommodate the newly arriving tasks. In contrast, the proposed algorithm allows all the vehicles to place their tasks on the fog node while considering the priority of the tasks. Hence, it provides a fair opportunity for every vehicle to send its tasks to the fog node.

Fig. 7 presents the influence of the number of vehicles on the task allocation at the fog node. In general, the standard deviation (SD) of the task-size allocation at the fog node decreases as the number of vehicles increases because every vehicle is allowed to share its tasks with the fog node in a fair way. The comparison showed that the proposed offloading scheme had less deviation than the random offloading scheme because the proposed scheme facilitates fair sharing of tasks, and every vehicle has an opportunity to send its tasks to the fog node. When ten vehicles were offloading their tasks, the SD of the proposed scheme was almost 14, whereas that of the random offloading scheme was almost 46, a difference of almost 32. Hence, the proposed offloading scheme provides every vehicle the opportunity to offload its tasks to the fog node.

Fig. 8 shows the effect of varying the task sizes on the standard deviation of the task-size allocation at the fog node. In the simulation, the deviation of the random offloading scheme increased as the task size increased, showing that some vehicles could offload their tasks to the fog node while others were still waiting despite having priority tasks. In contrast, the proposed offloading scheme showed significantly better results: as the task size increased, every vehicle was given a fair opportunity to offload its tasks. When the task size reached 250 MB, the deviation of the random offloading scheme was almost 90, whereas that of the proposed scheme was almost 10. Hence, the proposed scheme is a fair offloading scheme that provides equal opportunity to every vehicle based on the task priority.

Fig. 6. Vehicular task offloading.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig6.png
Fig. 7. Comparison of the number of vehicles and their deviation for the proposed and random scheme.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig7.png
Fig. 8. Comparison of the task sizes with the standard deviation of task sizes in the fog node in the proposed and random offloading scheme.
../../Resources/ieie/IEIESPC.2024.13.1.33/fig8.png

5. Conclusion and Future Work

VFC provides safety to vehicles, traffic management, and infotainment services for users. It uses different algorithms to offload data efficiently. Nevertheless, VFC faces many challenges, such as data offloading to the fog node, efficient channel utilization, mobility of high-speed vehicles, real-time feedback, and faster responses according to priority.

This study examined efficient task offloading to the fog node. It adopted the WMMFS algorithm, which provides a fair opportunity to every vehicle sending data to the fog node; under this algorithm, the fog node shares its resources fairly among all connected vehicles. The vehicles use the fog node to compute tasks, store data, and filter information.

Sometimes a faster response is needed in VFC, such as when an obstacle is encountered on a fast-moving track or there is an accident on a high-speed road. In these cases, all vehicles must be informed promptly; otherwise, a major accident could damage many vehicles and kill or injure many people. Therefore, it is essential to prioritize these types of tasks and avoid such accidents. The WMMFS algorithm can help improve safety conditions because each task is given a weight, and its computation and response are performed based on that weight. Hence, the task with the highest weight is served first, and the remaining tasks are completed in order of decreasing weight.

Future studies will address these problems and provide more safety and care in the practical design of vehicular networks.

ACKNOWLEDGMENTS

This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No.2022R1A2C1003549) and in part by the 2023 Hongik University Innovation Support Program Fund.

REFERENCES

1
I. Martinez, A. S. Hafid and A. Jarray, "Design, Resource Management, and Evaluation of Fog Computing Systems: A Survey," IEEE Internet of Things Journal, vol. 8, no. 4, pp. 2494-2516, Feb. 2021.
2
H.-J. Hong, "From Cloud Computing to Fog Computing: Unleash the Power of Edge and End Devices," 2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2017, pp. 331-334.
3
C. Huang, R. Lu and K. R. Choo, "Vehicular Fog Computing: Architecture, Use Case, and Security and Forensic Challenges," IEEE Communications Magazine, vol. 55, no. 11, pp. 105-111, Nov. 2017.
4
K. S. Awaisi et al., "Towards a Fog Enabled Efficient Car Parking Architecture," IEEE Access, vol. 7, pp. 159100-159111, 2019.
5
A. Thakur and R. Malekian, "Fog Computing for Detecting Vehicular Congestion, an Internet of Vehicles Based Approach: A Review," IEEE Intelligent Transportation Systems Magazine, vol. 11, no. 2, pp. 8-16, Summer 2019.
6
J. Jakubiak and Y. Koucheryavy, "State of the Art and Research Challenges for VANETs," 2008 5th IEEE Consumer Communications and Networking Conference, 2008, pp. 912-916.
7
H. Hartenstein and L. P. Laberteaux, "A tutorial survey on vehicular ad hoc networks," IEEE Communications Magazine, vol. 46, no. 6, pp. 164-171, June 2008.
8
A. J. Kadhim and S. A. Hosseini Seno, "Energy-efficient multicast routing protocol based on SDN and fog computing for vehicular networks," Ad Hoc Networks, vol. 84, pp. 68-81, 2019.
9
Z. Ning, J. Huang and X. Wang, "Vehicular Fog Computing: Enabling Real-Time Traffic Management for Smart Cities," IEEE Wireless Communications, vol. 26, no. 1, pp. 87-93, Feb. 2019.
10
C. Lin, G. Han, X. Qi, M. Guizani and L. Shu, "A Distributed Mobile Fog Computing Scheme for Mobile Delay-Sensitive Applications in SDN-Enabled Vehicular Networks," IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5481-5493, May 2020.
11
C. Zhang, W. Li, Y. Luo and Y. Hu, "AIT: An AI-Enabled Trust Management System for Vehicular Networks Using Blockchain Technology," IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3157-3169, March 2021.
12
M. Peng, T. Q. S. Quek, G. Mao, Z. Ding and C. Wang, "Artificial-Intelligence-Driven Fog Radio Access Networks: Recent Advances and Future Trends," IEEE Wireless Communications, vol. 27, no. 2, pp. 12-13, April 2020.
13
A. Mchergui, T. Moulahi and S. Zeadally, "Survey on artificial intelligence (AI) techniques for vehicular ad-hoc networks (VANETs)," Vehicular Communications, vol. 34, 2022, 100403.
14
Z. Jiang, S. Fu, S. Zhou, Z. Niu, S. Zhang and S. Xu, "AI-Assisted Low Information Latency Wireless Networking," IEEE Wireless Communications, vol. 27, no. 1, pp. 108-115, Feb. 2020.
15
S. Memon and M. Maheswaran, "Using machine learning for handover optimization in vehicular fog computing," Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 182-190, 2019.
16
J. Feng, Z. Liu, C. Wu and Y. Ji, "AVE: Autonomous Vehicular Edge Computing Framework with ACO-Based Scheduling," IEEE Transactions on Vehicular Technology, vol. 66, no. 12, pp. 10660-10675, Dec. 2017.
17
I. Marsic, "Computer Networks: Performance and Quality of Service," 2013.
18
C. Swain et al., "METO: Matching-Theory-Based Efficient Task Offloading in IoT-Fog Interconnection Networks," IEEE Internet of Things Journal, vol. 8, no. 16, pp. 12705-12715, Aug. 2021.
Ihsan Ullah
../../Resources/ieie/IEIESPC.2024.13.1.33/au1.png

Ihsan Ullah received his B.S. degree in Computer Systems Engineering from the University of Engineering and Technology Peshawar, Pakistan, and his M.S. degree in Computer Engineering with a specialty in Computer and Wireless Networks from the Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan, in 2021. He worked as a research assistant in the Wireless and Communication laboratory for six months. He is currently pursuing his Ph.D. in the Department of Software and Communication Engineering at Hongik University, South Korea, under the supervision of Prof. Byung-Seo Kim. His current interests are NDN, Underwater Wireless Sensor Networks (UWSNs), cloud computing, fog computing, vehicular networks, machine learning, and artificial intelligence.

Byung-Seo Kim
../../Resources/ieie/IEIESPC.2024.13.1.33/au2.png

Byung-Seo Kim received his B.S. degree in Electrical Engineering from In-Ha University, In-Chon, Korea, in 1998 and his M.S. and Ph.D. in Electrical and Computer Engineering from the University of Florida in 2001 and 2004, respectively. His Ph.D. study was supervised by Dr. Yuguang Fang. Between 1997 and 1999, he worked for Motorola Korea Ltd., PaJu, Korea, as a CIM Engineer in ATR&D. From January 2005 to August 2007, he worked for Motorola Inc., Schaumburg, Illinois, as a Senior Software Engineer in Networks and Enterprises, designing the protocol and network architecture of wireless broadband mission-critical communications. He is currently a professor in the Department of Software and Communications Engineering, Hongik University, Korea. He is an IEEE Senior Member and an Associate Editor of IEEE Access, Telecommunication Systems, and the Journal of the Institute of Electronics and Information Engineers. His work has appeared in around 260 publications and 32 patents. His research interests include designing and developing efficient wireless/wired networks, including link-adaptable/cross-layer-based protocols, multi-protocol structures, wireless CCNs/NDNs, Mobile Edge Computing, physical-layer design for broadband PLC, and resource allocation algorithms for wireless networks.