I am running a Django app and I am trying to send a fairly large API POST request to my production server.
The problem: what takes 4 seconds locally takes about a minute in production. That might not sound like much, but I am planning to send this request hundreds of times, so every second counts.
I have narrowed the problem down and I think it might be an nginx configuration issue, but I can't be certain. Here is my troubleshooting process with the respective code:
I have a big dictionary dataset_dict = {1:1234, 2:1244 ... 525600: 124345662}, i.e. roughly half a million entries.
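A quick check of how big that payload is on the wire (a sketch; the exact size depends on the real values):

import json

# Rough size of the JSON request body.
payload = json.dumps(dataset_dict)
print(len(payload.encode("utf-8")) / 1e6, "MB")  # on the order of 10 MB for ~525k integer entries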
I send this and measure the time of my POST request:
import datetime

import requests

dataset_dict = {1:1234, 2:1244 ... 525600: 124345662}

data = {
    "element": name_element,  # defined elsewhere
    "description": "description of element",
    "type_data": "datatype",
    "data_json": dataset_dict,
}

start = datetime.datetime.now()
requests.post(url="myendpoint", json=data)  # requests serializes `data` to JSON
end = datetime.datetime.now()

runtime = end - start
print("time-post-request:", runtime)
This takes 4 seconds locally and 50 seconds in production.
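One way to split that measurement into serialization time versus network + server time (a sketch with the same data; the Session would also reuse one connection when I fire this request many times):

import datetime
import json

import requests

session = requests.Session()  # reuses the TCP/TLS connection across repeated requests

t0 = datetime.datetime.now()
body = json.dumps(data)  # serialization, timed on its own
t1 = datetime.datetime.now()

response = session.post(
    "myendpoint",
    data=body,
    headers={"Content-Type": "application/json"},
)
t2 = datetime.datetime.now()

print("serialize:", t1 - t0)
print("network + server:", t2 - t1)
print("time to response headers:", response.elapsed)  # measured by requests itself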
So I keep going and measure the time of only the server-side code, i.e. only the code executed in my view. I use raw SQL to get maximum performance:
import datetime
import json

from django.db import connection
from rest_framework import status
from rest_framework.response import Response

start_time = datetime.datetime.now()

data_json = json.dumps(request.data["data_json"])
##......code shortened for clarity

# Parameterized query: the driver handles escaping; string-formatting the JSON
# into the SQL breaks on quotes in the data and is open to SQL injection.
with connection.cursor() as cursor:
    cursor.execute("INSERT INTO sql_table(data_json) VALUES (%s)", [data_json])

end_time = datetime.datetime.now()
runtime = end_time - start_time
print("success, time needed", runtime)

msg = {"detail": "Created successfully"}
return Response(msg, status=status.HTTP_201_CREATED)
This code on the server needs 3 seconds locally and only 2 seconds in production.
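Note that these timers only cover the view body; whatever happens before the view runs (WSGI handling, middleware, routing) is outside them. A small timing middleware (hypothetical; it would be added to MIDDLEWARE in settings.py) would capture the full time Django spends on each request:

import time

class RequestTimingMiddleware:
    """Logs how long Django spends on each request, middleware and view included."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        duration = time.perf_counter() - start
        print(f"django time for {request.path}: {duration:.2f}s")
        return response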
So my question is now: where do the other ~48 seconds go? (The request takes 50 seconds end to end, but only 2 seconds in the view.)
I infer that I can exclude PostgreSQL settings, since the data insertion itself is fast.
Nginx settings seemed like a good place to look, so I monitored the server log and saw:

[warn] 28#28: *9388 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000060
In the official doc I read:
If the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
So I assume I lose time because nginx writes the request body to disk. I raise client_body_buffer_size to 4G, run it again, and it takes almost the same amount of time (55 seconds), and I still get the warning... Should I go even higher?
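One directive that looks relevant: proxy_request_buffering off makes nginx stream the request body to the upstream as it arrives instead of spooling it to a temporary file first. A sketch, assuming the app is reached via proxy_pass (for uwsgi_pass the analogue would be uwsgi_request_buffering):

location / {
    proxy_pass http://django_app;  # hypothetical upstream name
    proxy_request_buffering off;   # stream the body instead of writing it to a temp file
}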
Are there any other screws I could turn to get the performance up and the creation time down to what I have locally? Shouldn't production servers generally be faster than local dev servers? Or maybe it's the internet connection?
So my main question: how can I improve performance so that the data creation takes a time (a few seconds) comparable to what I see locally?
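If the missing time turns out to be upload bandwidth rather than nginx, compressing the request body should shrink it considerably, since integer-heavy JSON compresses well. This is only a client-side sketch and it assumes the server decompresses the body itself; Django does not do that out of the box, so it would need a small middleware or decompression in the view:

import gzip
import json

import requests

body = gzip.compress(json.dumps(data).encode("utf-8"))

response = requests.post(
    "myendpoint",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",  # the server side must decompress this itself
    },
)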
Specs:
Linux server
RAM: 16GB
CPUS: 4
I will post my nginx settings below. I am really no expert with nginx, so any advice on how to improve performance for this use case is highly appreciated, even if it doesn't solve the problem directly.
Nginx:
worker_processes auto;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    ...
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    send_timeout 90;
    keepalive_timeout 90;
    fastcgi_read_timeout 120;
    proxy_read_timeout 120;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;

    client_body_timeout 120;
    client_body_buffer_size 4G;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    client_header_timeout 120;
    client_max_body_size 5G;

    reset_timedout_connection on;
    types_hash_max_size 2048;
    server_tokens off;

    gzip on;
    gzip_static on;
    gzip_min_length 512;
}
Any more info needed, I'll be happy to post it.