In-memory key-value store in C, Go and Python
Summary
On paternity leave for my second child, I found myself writing an in-memory hashmap (a poor-man’s memcached), in Go, Python and C. I was wondering how hard it would be to replace memcached if we wanted to do something unusual with our key-value store. I also wanted to compare my knowledge of the languages, and, well, I get bored easily!
The code is on github as Key-Value-Polyglot.
Each version implements enough of the get and set commands from the memcached protocol that we can test them with a memcached client.
The wonderful Internet
Thanks to a great discussion on Hacker News, in the comments here, and on reddit, I have heavily updated this post. You’re reading the new improved version.
Having the Internet review your code is humbling. It is also an amazing way to learn. The code, and my knowledge, have improved tremendously thanks to the feedback.
So far wonderful folks have contributed versions in Java, Scala, Haskell, Node.js, Ruby (event machine), Python gevent and Python diesel. If you write a version in a different language (see Spec at bottom), please send it to me, either directly or as a pull-request on github. I’ll add it to the repo.
Networking
Respond to each request with a single write. Or switch TCP_NODELAY on.
I initially thought the Go version was many times faster than the C and Python versions, but that was due to quirks in my implementation.
The C and Python versions were doing multiple small writes to respond to each “get” request. On the server side, Nagle’s algorithm was buffering those small writes while waiting for the client to ACK the previous send. On the client side, delayed ACKs meant the client didn’t send that acknowledgment until a timeout expired. A textbook case, assuming you know about that textbook.
The solution was to combine those multiple writes into a single one. An alternative solution would be to switch on TCP_NODELAY.
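In Go, the combined write for a ‘get’ hit looks roughly like this; it’s a sketch, and the function and variable names are mine rather than the repo’s:

package kv

import (
    "bytes"
    "fmt"
    "net"
)

// writeGetHit answers a "get" hit with one write instead of several small
// ones, which is what triggered the Nagle / delayed-ACK stall.
func writeGetHit(conn net.Conn, key string, value []byte) error {
    var resp bytes.Buffer
    fmt.Fprintf(&resp, "VALUE %s 0 %d\r\n", key, len(value))
    resp.Write(value)
    resp.WriteString("\r\nEND\r\n")
    _, err := conn.Write(resp.Bytes()) // one write, one small TCP segment
    return err
}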
The Go version didn’t require me to know about TCP_NODELAY, which I appreciated.
Internally, Go sets the socket to be non-blocking and uses epoll and futex to wait for data. Memcached does the same thing, via libevent. I only used a single client, and the implementations are not thread-safe, so these don’t matter right now. They should matter with a large number of clients.
Writing it: Layers over Unix
Sockets: Our languages are wrapping POSIX socket system calls. C maps the calls exactly, and Python maps the C calls very closely. Go does a bit more work for us, replacing socket-bind-listen-accept with just listen-accept.
Multiple concurrent requests: Go has the magic go statement, Python has the threading library, and C has pthread_create. None of my implementations are thread safe (thanks timtadh on HN).
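Here is a rough Go sketch of both points: net.Listen collapses the socket-bind-listen sequence, and the go statement starts a goroutine per connection. handleConn is a placeholder for the protocol loop, not a function from the repo:

package kv

import "net"

// serve accepts connections forever and hands each one to its own goroutine.
func serve(addr string, handleConn func(net.Conn)) error {
    ln, err := net.Listen("tcp", addr) // e.g. ":11211"
    if err != nil {
        return err
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            return err
        }
        go handleConn(conn) // one goroutine per client
    }
}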
Read buffering: Managing the data you get from the socket was easy, once commenters taught me about C’s fdopen. Sockets are just streams of data; it’s up to the protocol to say when a message starts and ends. For the memcached protocol, we need to be able to read until \r\n, and also to read a specific number of bytes (which might include \r\n).
Go has the bufio package. C lets you treat a socket as a file, thanks to fdopen. Python does the same thing with socket.makefile, which presumably wraps fdopen.
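In Go, the two kinds of read look roughly like this (another sketch, with names of my own choosing):

package kv

import (
    "bufio"
    "io"
    "strings"
)

// readCommandLine reads up to and including the "\r\n" that ends a
// memcached command line.
func readCommandLine(r *bufio.Reader) (string, error) {
    line, err := r.ReadString('\n')
    if err != nil {
        return "", err
    }
    return strings.TrimRight(line, "\r\n"), nil
}

// readValue reads exactly n data bytes plus the trailing "\r\n". The data
// may itself contain "\r\n", so reading until a newline won't do here.
func readValue(r *bufio.Reader, n int) ([]byte, error) {
    buf := make([]byte, n+2)
    if _, err := io.ReadFull(r, buf); err != nil {
        return nil, err
    }
    return buf[:n], nil
}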
My C is rusty, so that version took me the longest to write, and it is the least legible. The biggest time sink was that I originally implemented my own buffering around the socket ‘recv’ call. Commenters (thanks kbob) pointed out I didn’t need to do that, and I replaced it.
The Python version took me a while to get right, and that’s mostly my fault. I knew Python wouldn’t make me write my own buffering code, but I spent too long playing with timeout and non-blocking setups. Once I realized I could treat the socket as a file (‘makefile’) things got easier.
Python 3’s universal newline support kindly normalizes \r\n to \n, but we disable it here to preserve Python 2 compatibility.
The Go version was the easiest to write because I had already used the bufio package, which solves our stream buffering problem.
Read on for how to build and profile the different versions yourself.
Build
- Go version: You’ll need a recent weekly snapshot of Go (hg update weekly), or Go v1 if it is released by the time you read this. Run it with go run memg.go, or compile it with go build memg.go and run the executable it makes.
- Python version: Uses Python 2.7 or Python 3; just run the script.
- C version: Build with gcc -std=gnu99 -g -W -Wall -pedantic -pthread -o memg memg.c. There should be no warnings.
Timing
This is explicitly not a benchmark, except if the question is “what can an average programmer do in a few hours?”.
All the versions perform about the same for me, and any differences are probably more due to my implementation than anything inherent in the languages. As antirez said in the Hacker News comments:
In order to perform such a comparison you can’t write three small throw-away programs, you need to optimize each version at your best for weeks to start to be meaningful, and you need an expert in the three systems.
To time the different versions on your machine run: time python test.py. That script requires Python 2 and pylibmc (pip install pylibmc or sudo apt-get install python-pylibmc).
Try test.py against memcached too.
Profiling
To make profiling easier, each version accepts a --single command line parameter, so that it exits after a single connection.
strace was very useful for inspecting all the versions (strace -T and strace -c). Run the following for a summary of the system calls issued, and run test.py in a different window:
strace -c <the memg version you want> --single
Python
Start it in single mode, under the profiler:
python -m cProfile -o stats memg.py --single
Run test.py in a different window. View the output with pstats:
$ python
> import pstats
> p = pstats.Stats("stats")
> p.strip_dirs().sort_stats("cumulative").print_stats(20)
Go
Build an executable with go build memg.go. Then run memg under prof, with --single so that it exits after the test connection closes:
go tool prof memg --single
See: prof command
You can get more detail using the runtime/pprof package and go tool pprof to examine the file it creates.
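Hooking the profiler in looks roughly like this; the memg.prof file name and the helper are my own, not part of the repo:

package kv

import (
    "os"
    "runtime/pprof"
)

// startCPUProfile writes a CPU profile to memg.prof. Call it near the top
// of main and invoke the returned stop function before exiting (for example
// when --single makes the server return).
func startCPUProfile() (stop func(), err error) {
    f, err := os.Create("memg.prof")
    if err != nil {
        return nil, err
    }
    if err := pprof.StartCPUProfile(f); err != nil {
        f.Close()
        return nil, err
    }
    return func() {
        pprof.StopCPUProfile()
        f.Close()
    }, nil
}

Then go tool pprof memg memg.prof shows where the time went.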
C
C has Valgrind, and its callgrind tool is very nice:
valgrind --tool=callgrind ./memg --single
Run test.py in a different window. View the output with kcachegrind:
kcachegrind callgrind.out.<num>
Spec
If you’re interested in trying it in a different language, or a different approach, here’s the short spec.
Each key-value store is an in-memory hash map. They listen on port 11211 and accept the set and get commands, so that you can reproduce the following telnet interaction:
telnet 127.0.0.1 11211
set greeting 0 0 11 # User
Hello World # User, 11 bytes
STORED # Server response
get greeting # User
VALUE greeting 0 11 # Server response
Hello World # Server response
END # Server response
You type the ‘set’, ‘Hello World’ and ‘get’ lines. The rest are replies. In the ‘set’ line, the ’11’ is the length of the value (‘Hello World’) in bytes. The two zeros are ignored. All lines terminate with \r\n.
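If you are porting it, the whole per-connection loop boils down to something like this Go sketch. The names are mine, error handling is minimal, and the flags and expiry fields are ignored, as above:

package kv

import (
    "bufio"
    "fmt"
    "io"
    "net"
    "strconv"
    "strings"
)

// handle is a bare-bones sketch of the per-connection loop described by
// the spec: parse "set" and "get" lines, keep values in a map, and answer
// with STORED / VALUE ... END. Thread safety is ignored, as in my versions.
func handle(conn net.Conn, store map[string][]byte) {
    defer conn.Close()
    r := bufio.NewReader(conn)
    for {
        line, err := r.ReadString('\n')
        if err != nil {
            return
        }
        parts := strings.Fields(line)
        switch {
        case len(parts) == 5 && parts[0] == "set": // set <key> <flags> <exptime> <bytes>
            n, err := strconv.Atoi(parts[4])
            if err != nil {
                return
            }
            buf := make([]byte, n+2) // the value plus its trailing \r\n
            if _, err := io.ReadFull(r, buf); err != nil {
                return
            }
            store[parts[1]] = buf[:n]
            fmt.Fprint(conn, "STORED\r\n")
        case len(parts) == 2 && parts[0] == "get": // get <key>
            if value, ok := store[parts[1]]; ok {
                // one write per reply, as in the Networking section above
                fmt.Fprintf(conn, "VALUE %s 0 %d\r\n%s\r\nEND\r\n", parts[1], len(value), value)
            } else {
                fmt.Fprint(conn, "END\r\n")
            }
        }
    }
}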
The repo has a test.py script which uses pylibmc. To get pylibmc: sudo pip install pylibmc or sudo apt-get install python-pylibmc.
If you make a version in a different language, please send it to me on github, or link it in the comments.