Keywords: IOV (I/O virtualization), Paravirtualization, virtio, SR-IOV

1. Introduction

Virtualization technology has been widely used in the IT (Information Technology) industry. The common theme is decoupling the logical from the physical, introducing a level of indirection between the abstract and the concrete [1]. Virtual memory is a technology that gives applications the illusion of a larger, contiguous memory space, hiding and abstracting the physical memory space. An operating system (OS) can handle multiple concurrent applications over its limited hardware resources using multiplexing and can give a single logical view by aggregating multiple distinct file systems. Storage subsystems can be virtualized by RAID controllers and storage arrays, which present the abstraction of multiple (virtual) disks to the operating systems, which address them as (real) disks.

Virtualization architecture typically consists of three layers at a high level, as shown in Fig. 1: the virtual machine, the virtualization software, and the underlying hardware. A virtual machine is a software abstraction that behaves as a complete hardware computer with its own virtualized processors, memory, and I/O devices. The virtualization software, called a hypervisor (or VMM: Virtual Machine Manager), provides the level of indirection that decouples an operating system and its applications from the physical hardware, and is responsible for managing and running virtual machines. The term guest is commonly used to distinguish the layer of software running within a virtual machine. A guest operating system manages applications and virtual hardware, while a hypervisor manages virtual machines and physical hardware [1].

The hypervisor is a key component, and there are two types of hypervisor. A Type-1 hypervisor is a separate software component that runs directly on the host's hardware to control the hardware and manage guest operating systems, providing a proper abstraction to the virtual machines running on top of it. It is sometimes called a bare-metal hypervisor. A Type-2 hypervisor runs on a conventional operating system just as other computer programs do. A guest operating system runs as a process on the host, and the hypervisor abstracts the guest operating systems from the host operating system.

Virtualization technology has been applied at many levels in computing, such as application virtualization, server virtualization, desktop virtualization, network virtualization, storage virtualization, etc. It can also be categorized by implementation technique, such as guest operating system virtualization, shared kernel virtualization, kernel level virtualization, hypervisor virtualization, etc.

As virtualization is a broad topic and the universe of I/O devices is large and diverse, this article focuses on I/O virtualization, primarily in the context of a single physical host, exploring the various implementation approaches and techniques that have been leveraged to enable flexible, high performance I/O virtualization.

Fig. 1. Basic virtualization architecture

2. Physical and virtual I/O operations

There are many benefits of I/O virtualization, which decouples a virtual machine's virtual I/O devices from their physical implementations. The first is better I/O device utilization by multiplexing, which allows multiple virtual devices to be implemented by a smaller number of physical devices. The second is seamless portability through flexible mapping between virtual and physical devices, which makes virtual machines portable across physical servers with different yet semantically compatible interfaces. The third is live migration of a virtual machine across physical servers, enabled by the hypervisor encapsulating the entire state of the virtual machine together with that of its virtual I/O devices (suspending and resuming the I/O operations across the servers). The fourth is dynamic decoupling/recoupling between the virtual and the physical by the hypervisor, which allows physical I/O devices to be upgraded, reconfigured, or modified while the virtual machines are running. The fifth is the ability to provide a single, superior virtual device to virtual machines by aggregating multiple physical devices in the hypervisor. Also, new features that are not supported by the physical devices can be provided by the hypervisor. Most of the benefits mentioned above depend on the hypervisor's interposition capability.

The I/O that is generated and consumed by virtual machines is called virtual I/O, as opposed to physical I/O, which is generated and consumed by the operating system that controls the physical hardware.

2.1 Physical I/O operation

The physical I/O interaction between the operating system (running on the CPU (Central Processing Unit)) and I/O devices takes three forms: I/O requests/commands from the operating system, asynchronous event delivery (i.e., interrupts) from the I/O device, and data movement between them via DMA (Direct Memory Access), described in Fig. 2 and operating as below (a minimal code sketch follows the list).

1) The operating system running on the CPU issues its requests/commands to an I/O device by setting the registers of the I/O device via PMIO (Port-mapped I/O) or MMIO (Memory-mapped I/O).

2) The I/O device responds to or notifies the operating system with an asynchronous interrupt, prompting the operating system to handle the event.

3) Massive data movement is done by reading/writing from/to shared memory via DMA.
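To make the three interaction paths concrete, the following C sketch shows how a driver might program a hypothetical MMIO-mapped device. The register offsets and names (REG_DMA_ADDR, REG_CMD, etc.) and the access helpers are illustrative assumptions, not any real device's interface.

    #include <stdint.h>

    #define REG_DMA_ADDR   0x00u   /* buffer address the device will DMA from/to */
    #define REG_DMA_LEN    0x08u   /* length of the shared buffer                */
    #define REG_CMD        0x10u   /* command/doorbell register (step 1)         */
    #define REG_IRQ_STATUS 0x18u   /* read in the interrupt handler (step 2)     */

    static inline void mmio_write64(volatile uint8_t *bar, uint32_t off, uint64_t v)
    {
        *(volatile uint64_t *)(bar + off) = v;      /* step 1: request via MMIO store */
    }

    static inline uint64_t mmio_read64(volatile uint8_t *bar, uint32_t off)
    {
        return *(volatile uint64_t *)(bar + off);
    }

    /* Steps 1 and 3: post a command and hand the device a DMA-able buffer. */
    void start_io(volatile uint8_t *bar, uint64_t buf_phys, uint64_t len)
    {
        mmio_write64(bar, REG_DMA_ADDR, buf_phys);  /* step 3: data will move via DMA */
        mmio_write64(bar, REG_DMA_LEN, len);
        mmio_write64(bar, REG_CMD, 1);              /* "go" doorbell                  */
    }

    /* Step 2: interrupt handler invoked asynchronously by the device. */
    void irq_handler(volatile uint8_t *bar)
    {
        uint64_t status = mmio_read64(bar, REG_IRQ_STATUS);
        (void)status;                               /* completion handling goes here  */
    }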

Fig. 2. I/O device interactions with CPU

As an example of transmitting data via an NIC (Network Interface Card), the operating system puts the data into shared memory and asks the NIC to transmit it by setting the NIC's registers with the location of the shared memory; then, after fetching the data via DMA and transmitting it, the NIC triggers an interrupt to the operating system to report the result so that the operating system can handle it (if the interrupt is enabled). For receiving data from the network, the NIC stores the data into shared memory via DMA and triggers an interrupt to the operating system (if it is enabled). Data movement between the NIC and the operating system via DMA is typically implemented with producer/consumer ring buffers. An NIC is supposed to employ at least one Tx (transmit) ring and one Rx (receive) ring per port, and multiple Tx/Rx rings per port give benefits for scalability and performance, as different rings can easily be handled concurrently by different cores.
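A minimal sketch of such a producer/consumer Tx ring is shown below. The descriptor layout and index discipline are simplified assumptions; real NICs add flags, completion status fields, doorbell registers, and per-queue interrupts.

    #include <stdint.h>
    #include <stdbool.h>

    #define RING_SIZE 256                       /* power of two for cheap wrap-around */

    struct tx_desc {
        uint64_t buf_phys;                      /* DMA address of the packet buffer   */
        uint32_t len;                           /* packet length in bytes             */
        uint32_t done;                          /* written by the NIC on completion   */
    };

    struct tx_ring {
        struct tx_desc desc[RING_SIZE];
        uint32_t head;                          /* next slot the driver fills         */
        uint32_t tail;                          /* next slot the driver reclaims      */
    };

    /* Driver side: enqueue one packet; the NIC is then "kicked" via a doorbell
     * register (omitted here) and consumes descriptors with DMA. */
    bool tx_enqueue(struct tx_ring *r, uint64_t buf_phys, uint32_t len)
    {
        uint32_t next = (r->head + 1) % RING_SIZE;
        if (next == r->tail)                    /* ring full                          */
            return false;
        r->desc[r->head] = (struct tx_desc){ .buf_phys = buf_phys, .len = len, .done = 0 };
        r->head = next;
        return true;
    }

    /* Driver side: reclaim descriptors the NIC has marked complete, typically
     * from the Tx completion interrupt handler. */
    void tx_reclaim(struct tx_ring *r)
    {
        while (r->tail != r->head && r->desc[r->tail].done)
            r->tail = (r->tail + 1) % RING_SIZE;
    }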

2.2 Virtual I/O operation

An I/O operation in a virtualization environment traverses two separate I/O stacks, in the virtual machine and in the hypervisor respectively, as shown in Fig. 3 [1], following the sequence below.

Fig. 3. Processing an I/O in virtualization environments

1) When an application running within a virtual machine issues an I/O request (typically using a system call), it is initially processed by the I/O stack in the guest operating system, and the device driver in the guest operating system issues the request to the virtual I/O device.

2) The hypervisor intercepts it and schedules requests from multiple virtual machines onto the underlying physical I/O device, usually via another device driver managed by the hypervisor or a privileged virtual machine with direct access to the physical device.

3) When the physical device finishes processing the I/O request, the two I/O stacks are traversed again in the reverse order. The actual device posts a physical completion interrupt, which is handled by the hypervisor. The hypervisor determines which virtual machine is associated with the completion and notifies it by posting a virtual interrupt for the virtual device managed by the guest operating system. To reduce overhead, some hypervisors perform virtual interrupt coalescing in software (sketched after this list), similar to the hardware batching optimizations found in physical devices, which delay interrupt delivery with the goal of posting only a single interrupt for multiple incoming events.
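The following self-contained C sketch illustrates the software virtual-interrupt coalescing mentioned in step 3: completions are counted, and a single virtual interrupt is posted once a count threshold or a delay budget is reached. The structure, thresholds, and stub helpers are illustrative assumptions, not any hypervisor's actual code.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in clock so the sketch is self-contained and deterministic. */
    static uint64_t now_ns(void)
    {
        static uint64_t fake_ns;
        return fake_ns += 1000;                  /* pretend 1 us passes per call */
    }

    static void post_virtual_irq(int vm_id)      /* stub for the hypervisor core */
    {
        printf("virtual interrupt posted to VM %d\n", vm_id);
    }

    struct vint_coalesce {
        uint32_t pending;                        /* completions not yet reported */
        uint64_t first_ns;                       /* time of first unreported one */
        uint32_t max_events;                     /* post after this many events  */
        uint64_t max_delay_ns;                   /* ...or after this much delay  */
    };

    /* Called by the hypervisor for each physical completion belonging to vm_id. */
    static void on_physical_completion(struct vint_coalesce *c, int vm_id)
    {
        if (c->pending++ == 0)
            c->first_ns = now_ns();

        if (c->pending >= c->max_events ||
            now_ns() - c->first_ns >= c->max_delay_ns) {
            post_virtual_irq(vm_id);             /* one interrupt for the batch  */
            c->pending = 0;
        }
    }

    int main(void)
    {
        struct vint_coalesce c = { .max_events = 8, .max_delay_ns = 200000 };
        for (int i = 0; i < 20; i++)             /* 20 completions produce only  */
            on_physical_completion(&c, 1);       /* 2 interrupts (4 stay pending */
        return 0;                                /* until a later flush)         */
    }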

Traversing two separate I/O stacks affects both latency and throughput and imposes additional CPU load. Interposition by the hypervisor gives the benefits mentioned in Section 2. However, it can incur additional overhead by manipulating I/O requests, resulting in fewer cores available for running virtual machines. The hypervisor also needs to multiplex the limited physical hardware across multiple virtual machines, which may result in scheduling delays for some virtual machines [1].

There are two ways of implementing I/O virtualization depending on whether the physical device drivers are embedded in the hypervisor or not: hypervisor-based I/O virtualization (or the direct driver model) and hosted I/O virtualization (or the indirect driver model), described in Fig. 4. In hypervisor-based I/O virtualization, the hypervisor includes the drivers as a part of itself, providing high performance I/O by implementing an efficient path for I/O requests from applications running in virtual machines down to the physical devices. As the hypervisor processes the I/O requests directly, it doesn't require any context switching, resulting in reduced latency. However, it needs to re-implement the drivers, which requires a certain engineering cost. VMware ESX Server follows this model.

Fig. 4. Hypervisor-based vs. hosted I/O virtualization

In contrast, in hosted I/O virtualization, the hypervisor doesn't include device drivers; instead, it relies on a distinct, schedulable entity, called "dom0" for Xen or QEMU for KVM, which runs with elevated privileges and in particular has the ability to run the physical device drivers of a standard operating system. It has the advantages of simplicity and portability, as the device drivers run in a standard operating system such as Linux (for Xen and KVM) or Windows (for Virtual Server) [2]. This article doesn't differentiate between the two models, regarding the privileged virtual machine as a part of the hypervisor.

3. Software-based I/O virtualization

There are two types of software-based I/O virtualization that do not require specific hardware features, full I/O virtualization and I/O paravirtualization, both giving the virtual machines a certain illusion that the physical devices can be accessed directly. Virtio is a way of doing I/O paravirtualization that has become a de facto standard. The implementation details differ across hypervisors (Xen, KVM, VMware's, etc.) but the basic concept is the same.

3.1 Full I/O virtualization

A guest operating system runs on top of a hypervisor that sits on the underlying hardware. The guest operating system is unaware that it is being virtualized and doesn't require any changes to work in this configuration, simply believing it has exclusive control of the I/O devices. As the hypervisor cannot allow the guest to control the physical I/O devices exclusively, it provides logical I/O devices to the virtual machine: the hypervisor needs to "trap" all the traffic coming from the guest driver and "emulate" the physical I/O devices to ensure compatibility, while processing the actual I/O operations over the physical I/O devices (trap and emulate), described in Fig. 5.

Fig. 5. I/O full virtualization and paravirtualization

The emulation needed to ensure compatibility must handle the processor's I/O instructions, uncached load and store accesses to I/O device addresses, DMA, interrupts, etc., for multiple virtual machines. This causes a great deal of complexity and workload on the hypervisor side. The approach gives great flexibility, as modifications to the guest operating system are not required; however, it introduces inefficiency and high complexity, resulting in poor performance and a smaller number of virtual machines per host, especially in server virtualization environments.
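The sketch below illustrates the trap-and-emulate path: a guest MMIO access to a virtual NIC exits into the hypervisor, which updates software-only device state and starts real I/O when the doorbell register is written. The exit structure and register map are assumptions for illustration, not any particular hypervisor's interface.

    #include <stdint.h>

    struct mmio_exit {                 /* filled in by the CPU/hypervisor on a trap */
        uint64_t gpa;                  /* guest-physical address that was accessed  */
        uint64_t value;                /* value written (for a write access)        */
        int      is_write;
    };

    struct vnic_state {                /* purely software state of the emulated NIC */
        uint64_t base_gpa;             /* where the virtual BAR is mapped           */
        uint64_t tx_ring_gpa;
        uint32_t irq_mask;
    };

    static void issue_physical_tx(uint64_t tx_ring_gpa)
    {
        /* Back-end stub: here the hypervisor would translate the ring address and
         * program the real NIC (or hand the request to a privileged VM). */
        (void)tx_ring_gpa;
    }

    /* Called by the hypervisor whenever the guest touches the virtual device BAR. */
    void handle_mmio_exit(struct vnic_state *s, const struct mmio_exit *e)
    {
        uint64_t off = e->gpa - s->base_gpa;

        if (!e->is_write)
            return;                    /* reads would return emulated state (omitted) */

        switch (off) {
        case 0x00:                     /* "Tx ring base" register of the virtual NIC */
            s->tx_ring_gpa = e->value;
            break;
        case 0x10:                     /* "doorbell": emulate by doing real I/O      */
            issue_physical_tx(s->tx_ring_gpa);
            break;
        case 0x18:                     /* interrupt mask, kept in software only      */
            s->irq_mask = (uint32_t)e->value;
            break;
        }
    }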

3.2 I/O paravirtualization

One way to reduce the overhead of the emulation is to optimize the communication between the virtual machine and the device emulation. It requires that the guest operating system be made aware that it is running in a virtualized environment and that special drivers be loaded into the guest to take care of the I/O operations.

In modern operating system environments such as Windows and Linux, it is possible to install device drivers that communicate the request's arguments to the hypervisor's device emulation code directly via hypercalls with minimal overhead. The system calls for I/O operations get replaced by hypercalls. A system call lets a user application ask the operating system to execute an operation with higher privilege than normal. In the same way, a hypercall is a way for the virtualized operating system to make the hypervisor handle privileged operations. This approach of using virtual hardware optimized for the virtualization layer, rather than matching any particular real device, is referred to as paravirtualization, described in Fig. 5. In practice, most modern virtualization platforms support an emulated legacy device for compatibility, as well as providing an optional paravirtual device for higher performance [1].
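As a rough illustration of the system-call analogy, the sketch below shows how a paravirtual front-end might replace a trapped MMIO access with an explicit hypercall. The vmcall-based calling convention, the hypercall number, and the argument registers shown here are purely illustrative assumptions; the real ABI is hypervisor-specific.

    #include <stdint.h>

    #define HC_SUBMIT_IO  42UL             /* hypothetical hypercall number          */

    static inline long hypercall2(unsigned long nr, unsigned long a0, unsigned long a1)
    {
        long ret;
    #if defined(__x86_64__)
        __asm__ volatile("vmcall"          /* exits from the guest to the hypervisor */
                         : "=a"(ret)
                         : "a"(nr), "D"(a0), "S"(a1)
                         : "memory");
    #else
        (void)nr; (void)a0; (void)a1; ret = -1;   /* non-x86: placeholder            */
    #endif
        return ret;
    }

    /* Paravirtual driver path: hand the hypervisor the guest-physical address of a
     * request descriptor instead of poking emulated device registers. */
    long pv_submit_io(uint64_t request_gpa, uint64_t length)
    {
        return hypercall2(HC_SUBMIT_IO, request_gpa, length);
    }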

In this configuration, the driver on the guest side is called a front-end driver and the driver on the host side is called a back-end driver. The front-end driver in the guest virtual machine accepts I/O requests from user processes and forwards the requests to its back-end driver. The back-end driver receives the I/O requests from the front-end driver and then handles them via the native driver of the physical I/O device. These front-end and back-end drivers are where virtio comes in: virtio provides a standardized interface for the development of emulated device access, to promote code reuse and increase efficiency.

3.3 Virtual I/O driver (virtio)

Virtio was chosen to be the main platform for I/O virtualization in KVM as a common (de facto standard) framework for hypervisors, initiated by Rusty Russell in his article [3]. Virtio is an abstraction for a set of common emulated devices in a paravirtualized hypervisor. This design allows the hypervisor to export a common set of emulated devices and make them available through a common application programming interface (API).

With a paravirtualized hypervisor in Fig. 5, the guests implement a common set of interfaces, with the particular device emulation behind a set of back-end drivers. The back-end drivers don’t need to be common as long as they implement the required behaviors of the front-ends.

For KVM, the device emulation occurs in user space using QEMU, so the back-end drivers communicate into the user space of the hypervisor to facilitate I/O through QEMU. QEMU is a system emulator that, in addition to providing a guest operating system virtualization platform, provides emulation of an entire system (PCI host controller, disk, network, video hardware, USB controller, and other hardware elements) [4].

In between the front-end drivers and back-end drivers, virtio defines a transport layer to support guest-to-hypervisor communications, which conceptually attaches the front-end drivers to the back-end drivers, described in Fig. 6. The transport layer is abstracted as a virtqueue, which is a part of the guest operating system's memory and a communication channel between the drivers. The vring is an implementation and memory layout of the virtqueue abstraction.
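For reference, the split virtqueue (vring) memory layout defined in the virtio specification [5] looks roughly as follows; padding, alignment rules, and the event-suppression fields are omitted in this simplified sketch.

    #include <stdint.h>

    struct vring_desc {              /* descriptor table: one entry per buffer        */
        uint64_t addr;               /* guest-physical address of the buffer          */
        uint32_t len;
        uint16_t flags;              /* e.g. NEXT (chained), WRITE (device writes)    */
        uint16_t next;               /* index of the next descriptor in a chain       */
    };

    struct vring_avail {             /* driver (front-end) -> device (back-end)       */
        uint16_t flags;
        uint16_t idx;                /* where the driver will put the next entry      */
        uint16_t ring[];             /* heads of available descriptor chains          */
    };

    struct vring_used_elem {
        uint32_t id;                 /* head of the descriptor chain that was consumed */
        uint32_t len;                /* bytes written into the buffer by the device    */
    };

    struct vring_used {              /* device (back-end) -> driver (front-end)       */
        uint16_t flags;
        uint16_t idx;
        struct vring_used_elem ring[];
    };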

Fig. 6. High level virtio architecture and its operation

There are five APIs defined in [3]: add_buf (to add a new buffer to the queue), kick (to notify the other side, i.e., the host, when buffers have been added), get_buf (to get a used buffer), enable_cb (to enable the callback process invoked when the hypervisor consumes a buffer, like an interrupt), and disable_cb (to disable the callback process). In a simple scenario for a block read request from a disk, the front-end driver calls add_buf to place an empty buffer on the queue together with its request, then calls kick to inform the hypervisor that there is a pending request. After the hypervisor processes the request and the buffer is filled, the guest is notified (i.e., interrupted). Then, the guest can get the data using get_buf.
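The block-read flow above can be summarized with the schematic C sketch below, using the operation names from [3]. Treat it as pseudocode-like C under stated assumptions: the request layout is hypothetical and the in-kernel Linux API has since evolved (e.g., virtqueue_add_sgs/virtqueue_kick), so the declarations are an interface sketch rather than a real driver.

    #include <stdint.h>
    #include <stddef.h>

    struct virtqueue;                                   /* opaque transport handle    */

    int   add_buf(struct virtqueue *vq, void *req, size_t out_len,
                  void *data_buf, size_t in_len);       /* expose buffers to the host */
    void  kick(struct virtqueue *vq);                   /* notify the hypervisor      */
    void *get_buf(struct virtqueue *vq, uint32_t *len); /* retrieve a used buffer     */

    struct blk_read_req { uint64_t sector; uint32_t nsectors; }; /* hypothetical      */

    void read_one_sector(struct virtqueue *vq, uint64_t sector, void *data_512b)
    {
        struct blk_read_req req = { .sector = sector, .nsectors = 1 };

        add_buf(vq, &req, sizeof(req), data_512b, 512); /* request out, data in       */
        kick(vq);                                       /* tell the back-end          */

        /* ... later, the back-end fills the buffer and injects a virtual interrupt;
         * the callback (enabled with enable_cb) then retrieves the result: */
        uint32_t len;
        void *done = get_buf(vq, &len);                 /* returns data_512b          */
        (void)done;
    }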

There were six device types in virtio version 1.0 (network, block, console, entropy source, memory balloon, and SCSI host) and four more devices were added in version 1.1 (GPU, input, crypto, and socket) [5]. Virtio 1.1 is fully backward compatible, introducing the packed virtqueue optimization for higher performance via fewer PCI reads/writes, as well as capability bits essential for hardware negotiation of the IOMMU (I/O Memory Management Unit) and DMA barriers [6].

Virtio is a part of the standard Linux library of useful virtualization functions and is normally included in most versions of Linux. The virtio guest drivers are also available for Windows (from the KVM [7] or Fedora [8] communities), Linux (2.6.x, 3.x, 4.x), FreeBSD 9.x/10.x, and OpenBSD 5.9+ [9].

4. I/O virtualization with hardware support

The I/O traffic required by virtual machines has increased significantly as more applications demand more I/O, especially for networking and storage. Software-based I/O processing by the hypervisor has many benefits, but it requires more CPU resources for processing, which leaves fewer CPU resources to allocate to virtual machines. Moreover, the hypervisor's processing capability is hard to scale up even with more CPU resources. There should be a way to avoid the performance issues of software-based solutions and to reduce the virtualization overheads.

4.1 Device pass-through

One of the simple ideas to overcome the performance issue of software-based I/O virtualization is to assign a physical I/O device to a virtual machine so that the virtual machine can access it exclusively, as in Fig. 7. It is also called PCI (Peripheral Component Interconnect) pass-through, as most I/O devices are based on PCI technology. It is an effective solution for a virtual machine with heavy I/O operations, and some I/O devices are typically non-sharable anyway, such as video adapters or serial ports. With this technology, a virtual machine can get almost the same performance as bare metal. The drawbacks are that the I/O device can no longer be shared among multiple virtual machines and that it limits the benefits of interposition mentioned in Section 2, such as live virtual machine migration. More seriously, a virtual machine with a dedicated physical I/O device can access the entire physical memory using DMA operations, even though parts of that memory may be allocated to other virtual machines or the hypervisor. This vulnerability can be mitigated by the IOMMU described in the following section.

Fig. 7. Device pass-through

4.2 IOMMU (I/O Memory Management Unit)

An address visible to a processor (virtual address) is different from a physical memory address (physical address), and typically the virtual address space is bigger than the physical address space. To access a physical address with a virtual address, there should be a way to translate the virtual address to the corresponding physical address, and this is done by the MMU (Memory Management Unit). The MMU translates the address based on the page table that is typically maintained by the operating system.

Likewise, the IOMMU is a memory management unit for DMA-capable I/O devices to reach physical memory, mapping the I/O addresses visible to the I/O devices to physical memory addresses, as in Fig. 8. It is useful for an I/O device not only to use contiguous I/O addresses over fragmented physical addresses but also to access a larger physical address space with a smaller I/O address space (a 64-bit physical address space accessed by a 32-bit I/O device, for example). With the operating system exclusively controlling the memory mapping in the IOMMU, the system can be protected from DMA attacks (where a malicious or faulty I/O device tries to access or corrupt physical memory using DMA, as DMA uses physical memory addresses and its behavior can be programmed).
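Conceptually, the IOMMU consults a table kept by the OS (or hypervisor) to translate each device-visible I/O address into a physical page. The sketch below is an illustrative simplification using a flat, single-level table; real IOMMUs use multi-level tables with permission bits and translation caches.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1ull << PAGE_SHIFT)
    #define IO_PAGES   1024u                 /* size of the device-visible I/O space */

    struct io_pt {
        uint64_t phys[IO_PAGES];             /* physical page base, 0 = not mapped   */
    };

    /* Done only by the OS (or hypervisor): map one I/O page to a physical page. */
    bool iommu_map(struct io_pt *pt, uint64_t iova, uint64_t pa)
    {
        uint64_t idx = iova >> PAGE_SHIFT;
        if (idx >= IO_PAGES)
            return false;
        pt->phys[idx] = pa & ~(PAGE_SIZE - 1);
        return true;
    }

    /* What the IOMMU conceptually does on every DMA access from the device. */
    bool iommu_translate(const struct io_pt *pt, uint64_t iova, uint64_t *pa_out)
    {
        uint64_t idx = iova >> PAGE_SHIFT;
        if (idx >= IO_PAGES || pt->phys[idx] == 0)
            return false;                    /* unmapped: the access is blocked      */
        *pa_out = pt->phys[idx] | (iova & (PAGE_SIZE - 1));
        return true;
    }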

In a virtualization environment, the hypervisor defines the amount of its physical memory that is available to the virtual machine (guest). The virtual machine uses the allocated amount as its guest physical address (GPA) space, and its applications then use their guest virtual addresses (GVA) over the GPA space. The mapping between GVA and GPA is managed by the guest operating system with its page table, and the mapping between GPA and host physical address (HPA) is managed by the hypervisor with its own page table. To translate a GVA to a GPA and then to an HPA, the two page tables in the guest and the hypervisor are used by the MMU. Likewise, the IOMMU needs to reach the HPA from the device's I/O address (a GPA from the guest's point of view), which requires the GPA-to-HPA mapping.
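The two-stage translation can be summarized with the following sketch, where toy linear lookups stand in for the real multi-level guest and hypervisor page tables (the function names and mappings are illustrative assumptions).

    #include <stdint.h>
    #include <stdbool.h>

    /* Toy "page tables" so the sketch is self-contained; real systems use
     * multi-level page tables and nested/extended page tables. */
    static bool guest_pt_lookup(uint64_t gva, uint64_t *gpa)   /* GVA -> GPA */
    {
        *gpa = gva - 0x10000000ull + 0x00400000ull;            /* pretend mapping */
        return true;
    }

    static bool host_pt_lookup(uint64_t gpa, uint64_t *hpa)    /* GPA -> HPA */
    {
        *hpa = gpa + 0x80000000ull;                            /* pretend offset  */
        return true;
    }

    /* Guest CPU access: the MMU composes the two stages, GVA -> GPA -> HPA. */
    bool translate_gva(uint64_t gva, uint64_t *hpa)
    {
        uint64_t gpa;
        return guest_pt_lookup(gva, &gpa) && host_pt_lookup(gpa, hpa);
    }

    /* A device assigned to the guest is programmed with GPAs, so its DMA needs
     * only the second stage (GPA -> HPA), which is what the IOMMU provides. */
    bool translate_device_dma(uint64_t gpa, uint64_t *hpa)
    {
        return host_pt_lookup(gpa, hpa);
    }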

Fig. 8. IOMMU and MMU

Direct I/O device access by a virtual machine, as with device pass-through, causes some issues, DMA attacks for example. When an I/O device is assigned to a virtual machine, the virtual machine's operating system programs the device's DMA with its GPAs, but a GPA is not the real HPA, and the I/O device doesn't know the mapping between the GPA and the HPA. The IOMMU can solve this by remapping (DMA remapping) the GPA to the HPA, utilizing the mapping used by the MMU for GPA-to-HPA translation. Similar to DMA remapping, interrupt remapping is needed to translate interrupt vectors fired by I/O devices, based on an interrupt remapping table configured by the hypervisor [2], allowing the interrupts to be intercepted and routed to the assigned vector on the assigned processor (core).
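A minimal sketch of the interrupt remapping step is shown below: the interrupt source index presented by the assigned device is looked up in a table programmed by the hypervisor and routed to the chosen vector and core. The field names and the delivery stub are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct irte {                     /* interrupt remapping table entry            */
        bool     present;
        uint32_t dest_cpu;            /* core currently running the target vCPU     */
        uint8_t  vector;              /* vector to deliver on that core             */
    };

    #define IRT_SIZE 256u
    static struct irte irt[IRT_SIZE]; /* programmed only by the hypervisor          */

    static void deliver_interrupt(uint32_t cpu, uint8_t vector)   /* stub primitive */
    {
        printf("deliver vector %u on cpu %u\n", vector, cpu);
    }

    /* What the remapping hardware conceptually does when an assigned device
     * signals interrupt source 'index'. */
    void remap_and_deliver(uint16_t index)
    {
        if (index >= IRT_SIZE || !irt[index].present)
            return;                   /* unconfigured source: interrupt is blocked  */
        deliver_interrupt(irt[index].dest_cpu, irt[index].vector);
    }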

AMD's implementation of an IOMMU is called AMD-Vi [10], and there are similar features such as VT-d (Virtualization Technology for Directed I/O) from Intel [11] and the SMMU (System Memory Management Unit) from ARM [12]. PCI-SIG (Peripheral Component Interconnect Special Interest Group) has relevant work under the terms I/O Virtualization (IOV) and Address Translation Services (ATS). Most hypervisors, Xen, KVM, Hyper-V, VMware ESX, etc., support the IOMMU for device pass-through, and the IOMMU enables direct hardware access for paravirtualized and fully virtualized guests.

4.3 SR-IOV (Single Root Input Output Virtualization)

SR-IOV is a standard mechanism defined by PCI-SIG, with Revision 1.0 in September 2007 and Revision 1.1 in January 2010 [13]. This I/O virtualization specification allows multiple operating systems (virtual machines) running simultaneously within a single computer (i.e., a single root complex) to natively share PCI Express devices. It bypasses hypervisor involvement in data movement by providing independent memory space, interrupts, and DMA streams for each virtual machine. This means the I/O device must be SR-IOV capable and a specific device driver is needed in the virtual machines.

It introduces two function types, Physical Functions (PFs) and Virtual Functions (VFs), shown in Fig. 9. PFs are full PCIe functions that include the SR-IOV capability and can configure and manage the SR-IOV functionality. VFs are lightweight PCIe functions that contain the resources necessary for data movement but have a carefully minimized set of configuration resources. This architecture lets an I/O device support multiple VFs, minimizing the hardware cost of each additional function by sharing the I/O device and its capabilities with multiple virtual machines. SR-IOV capable devices provide configurable numbers of independent VFs, each with its own PCI configuration space.
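On a Linux host, the number of VFs is typically configured through the PF's PCI sysfs attribute sriov_numvfs; the small C snippet below sketches this host-side step. The PCI address used in main() is only a placeholder and must be replaced with the actual PF's address, and the write fails if the device or driver does not support SR-IOV.

    #include <stdio.h>

    int enable_vfs(const char *pf_bdf, int num_vfs)
    {
        char path[256];
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/sriov_numvfs", pf_bdf);

        FILE *f = fopen(path, "w");
        if (!f)
            return -1;                        /* no such PF or no SR-IOV support */
        fprintf(f, "%d\n", num_vfs);          /* ask the PF driver to create VFs */
        return fclose(f);
    }

    int main(void)
    {
        /* "0000:3b:00.0" is a placeholder PF address; adjust to the actual NIC. */
        return enable_vfs("0000:3b:00.0", 4) == 0 ? 0 : 1;
    }

Each VF created this way then appears as its own PCI function, which can be assigned (passed through) to a virtual machine.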

It allows virtual machines, once created by the hypervisor, to share a piece of hardware without involving the hypervisor directly in every activity. For example, a physical NIC can expose a number of VFs that can be "attached" to virtual machines. The virtual machines will see the attached VFs as if they were physical cards. In other words, SR-IOV allows the creation of multiple "shadow" cards of a physical NIC, each with its own MAC (Medium Access Control) address [14].

Fig. 9. SR-IOV architecture

As SR-IOV allows a virtual machine to access its dedicated VF(s) directly, it can show almost the same performance as bare metal [15]. It can reduce the number of physical I/O devices of the same type and improve utilization of the I/O devices by sharing them among multiple virtual machines. However, by bypassing the hypervisor, it makes live migration of virtual machines difficult. Also, as VFs typically expose a subset of the PF's capabilities, a guest using VFs may not have all the capabilities available in the PF.

Major NIC vendors, like Intel and Mellanox, have already released NICs with SR-IOV, and they are in the commercial phase, especially for network intensive applications like NFV (Network Functions Virtualization).

4.4 MR-IOV (Multi-Root Input Output Virtualization)

MR-IOV is a standard mechanism defined by PCI-SIG, with Revision 1.0 in May 2008 [13]. It can be regarded as an extension of SR-IOV to multiple root complex environments (i.e., multiple servers), so it is best suited for blade environments. It enables the use of a single I/O device by multiple servers and multiple virtual machines simultaneously. It requires that the I/O device be MR-IOV capable and that there be an MR-IOV switching fabric between the I/O devices and the servers, described in Fig. 10. The I/O device is supposed to be placed in a separate chassis, and servers may require a bus extender card to connect to the switch.

In order to implement the MR-IOV specification, three components of the system need to be developed: MR-IOV switches, MR-IOV capable I/O devices, and management software for provisioning and orchestration. All three of these components need to be available simultaneously and work seamlessly, which, along with the inherent complexity, makes the implementation difficult.

Fig. 10. MR-IOV configuration

Clearly, MR-IOV can give better resource utilization by sharing I/O resources among multiple servers as well as virtual machines. However, it has the same drawback as SR-IOV, difficult live migration. Also, due to the complexity of the technology, it is relatively hard to implement at the moment.

4.5 Virtio with hardware acceleration

Virtio is a common (de facto standard) software interface for exposing a single physical device to multiple virtual machines, explained in Section 3.3. These virtio implementations require a hypervisor to process the virtual machine's virtio requests in communication with the physical device driver, which consumes CPU resources. A hardware accelerated virtio architecture implies that the back-end is implemented in hardware, thereby freeing up hypervisor resources to achieve a higher packet rate [6], so it can be regarded as a hybrid solution.

One such technique is called vDPA (vhost Data Path Acceleration) [16]. The basic idea of vDPA is to separate the data path (directly between the virtual machine and the hardware device) from the control path (through the hypervisor), using the standard virtio I/O driver and its transport (the virtqueue/vring). vDPA compatible devices, which can be used as data path accelerators, are able to use the virtqueues provided by the virtio driver in the virtual machine directly, with DMA enqueue/dequeue via the IOMMU. So the virtio driver in the virtual machine can exchange data with the accelerator directly via the virtqueue, without the hypervisor's involvement. The control path events (e.g., device start/stop) in the virtual machine are still trapped and handled by the hypervisor, which delivers such events to the back-end.
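The split can be pictured with the conceptual sketch below: control-plane operations are still handled in software, while the data plane is the virtio ring itself, programmed into the device (with addresses translated by the IOMMU) so that the device can consume it directly. The structure and callback names are illustrative, not the actual vDPA kernel or DPDK API.

    #include <stdint.h>

    struct vring_info {
        uint64_t desc_gpa;           /* descriptor table, as guest-physical addresses */
        uint64_t avail_gpa;
        uint64_t used_gpa;
        uint16_t size;
    };

    struct vdpa_ctrl_ops {           /* control path: trapped and handled in software */
        int  (*set_features)(uint64_t features);
        int  (*set_vring)(uint16_t qid, const struct vring_info *vr); /* program HW   */
        int  (*start)(void);
        void (*stop)(void);
    };

    /* Data path: nothing to do in software.  Once set_vring() has handed the ring's
     * (IOMMU-translated) addresses to the device, the guest's virtio driver and the
     * device exchange buffers and notifications directly. */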

This approach can give some benefits, such as much higher performance than software-based virtio, live migration support, an unmodified virtio driver on the guest [16], and reduced CPU utilization for device emulation. However, it requires certain updates to the kernel and to the hypervisor's host device driver, and there is a possibility of mismatches between virtio and hardware capabilities. vDPA is evolving and mainly driven by DPDK (Data Plane Development Kit) [17], which was initiated by Intel Corp. as a development toolkit for high performance network processing and later became an open source project.

Fig. 11. vDPA architecture and operation

The basic operation flow is the same as that of virtio from the virtual machine's perspective in Fig. 11, but the buffer exchange and notifications are done directly between the virtual machine and the hardware I/O device instead of through the hypervisor. Interrupt remapping (to notify the guest) and DMA remapping (to access physical memory directly) are required in the IOMMU. A pilot implementation has been done mainly using KVM and DPDK, and the specification has been submitted to open committees.

5. Conclusion

I/O virtualization gives many advantages, such as better hardware utilization, seamless portability of virtual machines across servers, live migration, dynamic device management while virtual machines are running, virtual views over multiple hardware devices, support for new features not provided by the hardware, etc. Software-based solutions can deliver most of these advantages; however, they come at the expense of performance and CPU resources, as I/O virtualization can impose a heavy processing load when all I/O traffic needs to be processed by the CPU.

Hardware-based solutions with pass-through technology can resolve the performance and CPU resource issues; however, they come at the expense of losing virtualization benefits. Hybrid solutions combining software- and hardware-based approaches are under consideration to take the benefits of both, but they are still at an early stage.

I/O traffic, especially network I/O, has evolved at a pace that goes beyond CPU processing capabilities, and applications like NFV are getting more popular and require more and more traffic. In this environment, a single solution is unlikely to meet all the requirements of cloud and telecom infrastructures, so multiple solutions would be used; for example, I/O intensive applications can use the hardware-based or hybrid solutions, while non-I/O-intensive ones sit on the software-based one. In any case, a key challenge for I/O virtualization is to achieve the virtualization benefits with minimal overhead.

Standardization activities are already in progress, and open committees actively contribute. Most of the technologies have focused on a single physical server; however, the broader context of I/O virtualization across multiple servers and distributed systems needs to be addressed further. This article has reviewed the benefits and characteristics of I/O virtualization and the currently available technologies from software and hardware perspectives, and has checked their status and trends.

References

[1] Mendel Rosenblum, Carl Waldspurger, November 2011, "I/O Virtualization: Decoupling a logical device from its physical implementation offers many compelling advantages," ACM Queue, https://queue.acm.org/detail.cfm?id=2071256

[2] Edouard Bugnion, Jason Nieh, Dan Tsafrir, 2017, Hardware and Software Support for Virtualization, Morgan & Claypool Publishers, pp. 102-107

[3] Rusty Russell, July 2008, "virtio: Towards a De-Facto Standard For Virtual I/O Devices," ACM SIGOPS Operating Systems Review, Vol. 42, No. 5, pp. 95-103

[4] M. Jones, January 29, 2010, "Virtio: An I/O virtualization framework for Linux," https://developer.ibm.com/articles/l-virtio/

[5] Virtio Specification, https://docs.oasis-open.org/virtio/virtio or https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html

[6] Mellanox White Paper, 2019, "Comparison of OVS/vRouter Acceleration Techniques: SRIOV vs. Virtio," https://www.mellanox.com/download/whitepaper/comparison-of-ovs-vrouter-acceleration-techniques-sri-iov-and-virtio/

[7] https://github.com/virtio-win/kvm-guest-drivers-windows

[8] https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html

[9] https://en.wikibooks.org/wiki/QEMU/Devices/Virtio

[10] AMD Corp., December 2016, AMD I/O Virtualization Technology (IOMMU) Specification, Revision 3, https://www.amd.com/system/files/TechDocs/48882_IOMMU.pdf

[11] Intel Corp., June 2019, Intel Virtualization Technology for Directed I/O, Architecture Specification, Revision 3.1, https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf

[12] ARM Ltd., 2016, ARM System Memory Management Unit Architecture Specification: SMMU architecture version 2.0, http://infocenter.arm.com/help/topic/com.arm.doc.ihi0062d.c/IHI0062D_c_system_mmu_architecture_specification.pdf

[13] PCI Special Interest Group, https://pcisig.com/specifications

[14] Bruno Chatras, Francois-Frederic Ozog, July/August 2016, "Network Functions Virtualization: The Portability Challenge," IEEE Network

[15] Y. Dong, et al., 2010, "High Performance Network Virtualization with SR-IOV," in Proc. 2010 IEEE 16th Int'l Symp. High Performance Computer Architecture, pp. 1-10

[16] Cunming Liang, October 2018, "VDPA: vhost-mdev as New vhost Protocol Transport," KVM Forum 2018, https://events.linuxfoundation.org/wp-content/uploads/2017/12/Cunming-Liang-Intel-KVM-Forum-2018-VDPA-VHOST-MDEV.pdf

[17] DPDK (Data Plane Development Kit) community, https://www.dpdk.org/

Author Biography

Yongkeun Kim (金容槿)

1988 B.S., Department of Computer Science, Ajou University

1990 M.S., Department of Computer Engineering, Ajou University

1990~1998 Senior Researcher, Ssangyong Information & Communications Corp. (SICC)

1995 Professional Engineer, Computer System Application (Information Processing)

1998~2001 Department Manager, Lucent Technologies

2001~2002 Director, Jetstream Communications

2003~2018 Vice President, 6WIND S.A.

~Present CEO, KoreaQuest, Inc.

E-mail : mkim@koreaquest.net

Yongkeun Kim operates KoreaQuest, Inc., an IT consulting firm, and has worked for international IT companies, such as 6WIND, Jetstream Communications, Lucent Technologies, and Ascend Communications, as well as a Korean IT company (SICC), as a vice president and a senior researcher since 1990.

He holds Bachelor's and Master's degrees in Computer Engineering from Ajou University in Suwon, Korea, and also earned a Professional Engineer degree in the area of information processing.