It is rumored that fasthttp is faster than Nginx in some scenarios, which suggests fasthttp must be quite well optimized. Let's run some benchmarks.
The first test is a simple hello-server benchmark. The following results were obtained on a Mac; the numbers may differ under Linux.
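For reference, a minimal sketch of the kind of hello server being benchmarked is shown below. The listen address and response body are arbitrary choices for this sketch, not details from the original test setup:

```go
package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

func main() {
	// A minimal hello handler: fasthttp passes a pooled *RequestCtx
	// instead of separate request and response objects.
	handler := func(ctx *fasthttp.RequestCtx) {
		ctx.WriteString("hello, world")
	}

	// :8080 is an arbitrary port chosen for this sketch.
	if err := fasthttp.ListenAndServe(":8080", handler); err != nil {
		log.Fatalf("server error: %v", err)
	}
}
```

A load generator such as wrk or hey can then be pointed at this endpoint and at an equivalent net/http server to compare throughput.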
fasthttp
Even though this is just a hello-world service, fasthttp's performance catches up with an equivalent service written in Rust, which is somewhat surprising. It also indirectly shows that, in "certain scenarios", Go can at least perform on par with languages without garbage collection (GC).
Throughout request handling, almost every object is reused: the ctx (which embeds the request and response structures), readers, writers, and body read buffers. The author is clearly obsessive about memory reuse.
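The reuse itself is the standard sync.Pool idiom. A simplified sketch of the pattern follows; the requestCtx type and its fields here are illustrative, not fasthttp's actual internals:

```go
package main

import (
	"fmt"
	"sync"
)

// requestCtx loosely mimics the per-request state fasthttp reuses:
// request and response buffers that survive across requests.
// The field layout is illustrative only.
type requestCtx struct {
	reqBody []byte
	resBody []byte
}

// reset truncates the buffers but keeps their underlying arrays,
// so the next request reuses the already-allocated capacity.
func (c *requestCtx) reset() {
	c.reqBody = c.reqBody[:0]
	c.resBody = c.resBody[:0]
}

var ctxPool = sync.Pool{
	New: func() any { return &requestCtx{} },
}

func handleOneRequest(payload []byte) {
	ctx := ctxPool.Get().(*requestCtx)
	defer func() {
		ctx.reset()
		ctxPool.Put(ctx) // the ctx's lifetime ends here as far as the server is concerned
	}()

	ctx.reqBody = append(ctx.reqBody, payload...)
	ctx.resBody = append(ctx.resBody, "ok"...)
}

func main() {
	for i := 0; i < 3; i++ {
		handleOneRequest([]byte("hello"))
	}
	fmt.Println("served 3 requests with pooled contexts")
}
```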
Header processing works the same way: rawHeaders is one large byte buffer, and the parsed header values point into that buffer, so many small per-header objects are never allocated.
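A simplified sketch of the idea, where each parsed key and value is a sub-slice of the raw buffer rather than a fresh allocation (the types and parsing code are illustrative, not fasthttp's actual implementation):

```go
package main

import (
	"bytes"
	"fmt"
)

// headerKV holds a parsed header; key and value alias the raw header
// buffer instead of being freshly allocated strings.
type headerKV struct {
	key   []byte
	value []byte
}

// parseHeaders splits rawHeaders into kv pairs that share the input
// buffer's underlying array, so no per-header copies are made.
func parseHeaders(rawHeaders []byte) []headerKV {
	var kvs []headerKV
	for _, line := range bytes.Split(rawHeaders, []byte("\r\n")) {
		if i := bytes.IndexByte(line, ':'); i > 0 {
			kvs = append(kvs, headerKV{
				key:   line[:i],
				value: bytes.TrimSpace(line[i+1:]),
			})
		}
	}
	return kvs
}

func main() {
	raw := []byte("Host: example.com\r\nContent-Type: text/plain\r\n")
	for _, kv := range parseHeaders(raw) {
		fmt.Printf("%s = %s\n", kv.key, kv.value)
	}
}
```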
If we were implementing this kind of key-value header structure ourselves, we would most likely just reach for a map[string][]string.
Reading the serveConn flow also reveals a more obvious problem: after the user's Handler returns, fasthttp releases all related objects and puts them back into the object pool. In some scenarios this is inappropriate. For example:
If the user's handler starts a goroutine asynchronously and that goroutine uses an object such as ctx.Request, you will run into concurrency problems: from fasthttp's point of view the ctx's life cycle has ended and it has been returned to the sync.Pool, while the user is still using it. To avoid this, users must copy whatever objects fasthttp hands them before passing them to another goroutine.
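The following sketch contrasts the hazardous pattern with one common way to make it safe. Request.CopyTo is part of fasthttp's public API; the handlers and the port are illustrative choices for this sketch:

```go
package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

// unsafeHandler shows the hazard: the goroutine keeps using ctx after
// the handler returns, by which time fasthttp may have reset ctx and
// put it back into its pool.
func unsafeHandler(ctx *fasthttp.RequestCtx) {
	go func() {
		// BUG: ctx.Request may already be recycled and overwritten
		// by a different request when this line runs.
		log.Printf("async body: %s", ctx.Request.Body())
	}()
	ctx.WriteString("ok")
}

// safeHandler copies the data it needs before going asynchronous.
func safeHandler(ctx *fasthttp.RequestCtx) {
	var reqCopy fasthttp.Request
	ctx.Request.CopyTo(&reqCopy)
	go func() {
		// Safe: reqCopy is owned by this goroutine alone.
		log.Printf("async body: %s", reqCopy.Body())
	}()
	ctx.WriteString("ok")
}

func main() {
	// Mounting safeHandler here; unsafeHandler is kept only to
	// illustrate the buggy pattern.
	if err := fasthttp.ListenAndServe(":8080", safeHandler); err != nil {
		log.Fatal(err)
	}
}
```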
Seen this way, performance optimizations built on sync.Pool often come at a price. Whatever the scenario, using sync.Pool forces you to make assumptions about object life cycles in the application, and those assumptions rarely hold for 100% of scenarios. Otherwise, these techniques would long ago have made it into the standard library instead of living in open-source libraries.
For users of such a library, these optimization tricks mean a heavier mental burden and a higher risk of production bugs. Read carefully before adopting an open-source library; for business scenarios that are not performance-sensitive, the standard library is the better choice.