django - rabbitmq+celery memory leak?


I've been happily running celery + rabbit for about a month in a production demo. Today I decided to upgrade celery from 2.1.4 to 2.2.4, and now rabbit is spinning out of control. After running for a while, the epmd and beam.smp processes on my node slowly eat more and more memory while pegging the CPU (100+% CPU usage).
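To show what I mean, this is roughly how I'm watching the growth (a minimal sketch, assuming a Linux box where the RabbitMQ Erlang VM shows up as beam.smp):

    # Poll the RabbitMQ Erlang VM every 30 seconds, printing PID,
    # resident memory (KB), CPU usage and uptime so the leak is visible.
    watch -n 30 'ps -C beam.smp -o pid,rss,pcpu,etime,comm'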

I can run rabbitmqctl list_connections and see nothing unusual (just the single connection from my test node). If I run rabbitmqctl list_queues -p <VHOST>, there are no messages in any queue apart from the heartbeat from my test node. If I let the process run for a couple of hours it takes the machine down.
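For reference, these are the diagnostic commands I'm running (the vhost name is a placeholder, and the extra column names are just illustrative optional info items that list_queues accepts):

    # List the open AMQP connections on the broker.
    rabbitmqctl list_connections

    # List queues on the vhost with message counts, consumer counts
    # and per-queue memory usage.
    rabbitmqctl list_queues -p <VHOST> name messages consumers memory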

I haven't changed any code; I've poked at things with camqadm, and rabbitmqctl stop_app just hangs. The only way I can 'fix' it is to kill -9 beam.smp (and all related processes) on my rabbitmq server and then force_reset.
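For what it's worth, the recovery sequence ends up looking roughly like this (a sketch; the init script path depends on the distribution, and force_reset wipes the node's state, so vhosts, users and queues have to be recreated afterwards):

    # stop_app should shut the broker down cleanly, but here it hangs,
    # so forcibly kill the Erlang VM and the port mapper instead.
    pkill -9 beam.smp
    epmd -kill

    # Bring the server back up, then wipe the node state and restart the app.
    /etc/init.d/rabbitmq-server start
    rabbitmqctl stop_app
    rabbitmqctl force_reset
    rabbitmqctl start_app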

I have no idea how to go about debugging this, since no new messages show up in any queue. Has anyone run into this before? Any ideas? What other information should I be looking at?

A Celery developer told me a few months ago that versions after 2.1.1 were affected by RabbitMQ memory leaks and CPU spikes. I am still using version 2.1.1 and I do not have this problem.

It is also true that celery version 2.2.4 introduced a memory problem, but it is fixed if you update to celery 2.2.5.
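A minimal sketch of the upgrade, assuming a pip-managed environment:

    # Check which celery version is currently installed.
    python -c "import celery; print(celery.__version__)"

    # Move to the 2.2.5 bugfix release.
    pip install "celery==2.2.5"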

I hope this helps.
