In many development and production environments, communication with a Linux virtual machine is assumed to happen over the network: internal IP addresses, SSH, firewall rules, routes, tunnels… But when the host and the VM share the same physical machine, all that networking scaffolding can be pure overhead.
This is where AF_VSOCK comes into play, an address family in the Linux kernel designed specifically for communication between the hypervisor and its virtual machines. Combined with gRPC, it allows high-performance services between host and VM without a single packet ever leaving the machine.
What exactly is AF_VSOCK?
AF_VSOCK is an address family (like AF_INET for IPv4 or AF_UNIX for local sockets) designed for host ↔ guest channels inside the same physical host.
Instead of using IP addresses such as 192.168.x.x, AF_VSOCK uses:
- CID (Context ID): identifies who is who.
  - The host is usually CID 2.
  - Each VM gets a unique CID (for example 3, 4, etc.).
- Port: similar to TCP ports (for example 9999).
With these two values (cid:port), the kernel routes data between host and VM without going through the TCP/IP stack. There is no DHCP, no NAT, no routing tables, no external firewalls in the middle.
In practice, it is like having a direct pipe that punches through the VM wall.
Why gRPC + vsock is such a powerful combination
Any socket-based protocol can run over this AF_VSOCK channel. A particularly interesting example today is gRPC, Google’s high-performance RPC framework.
Together they offer several clear advantages:
- Lower latency: since the full network stack is skipped, overhead is reduced. Data moves directly between host and guest memory.
- No IP and no SSH: there is no need to assign an internal IP to the VM or open ports. Everything is identified by CID and vsock port.
- Smaller attack surface: if there is no exposed network interface, there is no traffic to intercept from the outside. (You still need to secure things inside the host, but the "network" vector disappears.)
- Operational simplicity: no need to manage firewall rules, port forwarding or complex networking just so the host can talk to its own VM.
In short: the VM remains isolated, but the host can talk to it as if there were an extended local socket.
A gRPC server inside the VM, a client on the host
The original example is a small C++ gRPC service exposing a basic addition operation:
class VsockServiceImpl final : public VsockService::Service {
  Status Addition(ServerContext* context, const AdditionRequest* request,
                  AdditionResponse* response) override {
    int32_t result = request->a() + request->b();
    response->set_c(result);
    std::cout << "Addition: " << request->a() << " + " << request->b()
              << " = " << result << std::endl;
    return Status::OK;
  }
};
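The service definition itself is not shown in the original. A plausible sketch of the .proto file it implies (message and field names inferred from the accessors a(), b() and set_c(); the actual names may differ) would be:

```protobuf
syntax = "proto3";

service VsockService {
  rpc Addition (AdditionRequest) returns (AdditionResponse);
}

message AdditionRequest {
  int32 a = 1;
  int32 b = 2;
}

message AdditionResponse {
  int32 c = 1;
}
```

The point worth noticing is that nothing in the .proto mentions vsock at all: the transport choice lives entirely in the address strings used by the server and the client.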
This service is published inside the VM, listening on a vsock address:
void RunServer() {
  std::string server_address("vsock:3:9999");  // CID 3, port 9999
  VsockServiceImpl service;

  ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  builder.RegisterService(&service);

  std::unique_ptr<Server> server(builder.BuildAndStart());
  std::cout << "Server listening on " << server_address << std::endl;
  server->Wait();
}
- CID 3 would correspond to the guest VM.
- Port 9999 is the vsock port where gRPC requests are served.
On the host, the gRPC client simply opens a channel to vsock:CID:port and sends the request as it would to any “normal” gRPC server. When the client runs, the response comes back immediately:
Addition result: 5 + 7 = 12
All of this happens without configuring a single IP or bringing up a network interface inside the VM.
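A host-side client along those lines might look like the following. This is a sketch, not code from the original: it assumes the stub generated from the hypothetical service definition above (header name invented) and a gRPC build with vsock target support:

```cpp
#include <iostream>
#include <memory>
#include <grpcpp/grpcpp.h>
#include "vsock_service.grpc.pb.h"  // hypothetical generated header

int main() {
  // "vsock:3:9999" -- guest CID 3, port 9999, matching the server above.
  auto channel = grpc::CreateChannel("vsock:3:9999",
                                     grpc::InsecureChannelCredentials());
  std::unique_ptr<VsockService::Stub> stub = VsockService::NewStub(channel);

  AdditionRequest request;
  request.set_a(5);
  request.set_b(7);

  AdditionResponse response;
  grpc::ClientContext context;
  grpc::Status status = stub->Addition(&context, request, &response);

  if (status.ok()) {
    std::cout << "Addition result: " << request.a() << " + "
              << request.b() << " = " << response.c() << std::endl;
  } else {
    std::cerr << "RPC failed: " << status.error_message() << std::endl;
  }
  return status.ok() ? 0 : 1;
}
```

Apart from the target string, this is indistinguishable from an ordinary TCP gRPC client, which is precisely the appeal.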
Use cases where AF_VSOCK makes a lot of sense
This architecture is not just a technical curiosity; it fits especially well in several scenarios:
1. Sandboxing and security
When running potentially dangerous processes inside VMs (for example malware analyzers, untrusted builds or user-submitted code), maximum isolation is key.
AF_VSOCK allows the controller on the host to communicate with the VM to send jobs, receive results or monitor it, without exposing any network port accessible from outside.
2. Local development infrastructure
In development environments, it is common to spin up VMs with internal services (databases, message queues, microservices). With vsock, the developer can talk to those services from the host without dealing with virtual networks, IPs, etc.
For automated tests or reproducible environments, this simplifies configuration quite a bit.
3. System functions and internal agents
Hypervisors, virtualization platforms or security solutions can use vsock so that agents inside the VM communicate with the orchestrator on the host without depending on the guest’s network.
This is useful, for example, to:
- Update agents.
- Collect internal metrics and logs.
- Send shutdown, snapshot or maintenance commands.
Limitations and practical considerations
Not everything is a win; there are some points to keep in mind:
- Hypervisor dependency: AF_VSOCK is available on Linux and supported by hypervisors like KVM, VMware or Hyper-V (with some nuances). It is not a universal solution for every environment.
- It doesn’t replace “real” networking: if the VM needs to talk to external servers, it still needs its regular network interface.
- Debugging and visibility: since there is no IP traffic, classic tools like tcpdump or Wireshark are not directly useful. You have to rely on specific utilities or on the application's own logging.
- Specific API: although the application's gRPC logic does not change, the way the channel is opened (vsock address, server configuration) does require some dedicated code.
Even so, for high-frequency, low-latency host ↔ guest communication, the benefits clearly outweigh the initial effort.
What’s next: repo and detailed guide
The developer who showcased this example is preparing a public repository with all the code needed to:
- Start a gRPC server over vsock inside a Linux VM.
- Run a client on the host that connects via AF_VSOCK.
- Measure latencies and compare them with a traditional TCP/IP connection.
The plan is also to publish a deep dive explaining step by step how to integrate vsock into your own tools: from configuring the hypervisor and CIDs to using the socket API in C/C++ or in other languages that wrap gRPC.
For anyone who enjoys low-level Linux plumbing, it is a perfect chance to experiment with a kernel feature that, until now, has been relatively unknown outside the virtualization world.
Frequently asked questions about AF_VSOCK and gRPC
How is AF_VSOCK different from a traditional UNIX socket?
UNIX sockets (AF_UNIX) only work inside the same operating system. AF_VSOCK, by contrast, is designed to communicate between the host and its VMs, crossing the virtualization boundary without using IP networking.
Can I use AF_VSOCK with languages other than C++?
Yes. gRPC supports multiple languages (Go, Java, Python, etc.). As long as the gRPC binding lets you specify a vsock address (or you can open a vsock socket and use it as the transport), you can reuse the same concept in almost any language.
Does AF_VSOCK always outperform TCP/IP?
For high-frequency host–VM communication it usually offers lower latency and less overhead, because it avoids the full network stack. Actual performance depends on the use case, the load, the hypervisor and the specific implementation.
Is this approach suitable for production?
In scenarios where host and VM share a physical machine and only need to talk to each other (monitoring, agents, internal services), it can be a very solid option. For internet-facing services or communication between multiple physical machines, you still need conventional TCP/IP.
Source: X
