How do I find what is using memory in a Python process in a production system?
My production system occasionally exhibits a memory leak I have not been able to reproduce in a development environment. I've used a Python memory profiler (specifically, Heapy) with some success in the development environment, but it can't help me with things I can't reproduce, and I'm reluctant to instrument our production system with Heapy because it takes a while to do its thing and its threaded remote interface does not work well in our server.
What I think I want is a way to dump a snapshot of the production Python process (or at least gc.get_objects), and then analyze it offline to see where it is using memory. How do I get a core dump of a Python process like this? Once I have one, how do I do something useful with it?
Asked by: Melissa302 | Posted: 28-01-2022
Answer 1
Using Python's gc garbage collector interface and sys.getsizeof() it's possible to dump all the Python objects and their sizes. Here's the code I'm using in production to troubleshoot a memory leak:
import gc
import os
import sys
import cPickle
import psutil

def memory_dump():
    dump = open("memory.pickle", 'wb')
    xs = []
    for obj in gc.get_objects():
        i = id(obj)
        size = sys.getsizeof(obj, 0)
        # referrers = [id(o) for o in gc.get_referrers(obj) if hasattr(o, '__class__')]
        referents = [id(o) for o in gc.get_referents(obj) if hasattr(o, '__class__')]
        if hasattr(obj, '__class__'):
            cls = str(obj.__class__)
            xs.append({'id': i, 'class': cls, 'size': size, 'referents': referents})
    cPickle.dump(xs, dump)

rss = psutil.Process(os.getpid()).get_memory_info().rss  # .memory_info() on newer psutil
# Dump variables if using more than 100MB of memory
if rss > 100 * 1024 * 1024:
    memory_dump()
    os.abort()
Note that I'm only saving data from objects that have a __class__ attribute, because those are the only objects I care about. It should be possible to save the complete list of objects, but you will need to take care when choosing other attributes. I also found that getting the referrers for each object was extremely slow, so I opted to save only the referents. Anyway, after the crash, the resulting pickled data can be read back like this:
with open("memory.pickle", 'rb') as dump:
    objs = cPickle.load(dump)
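To make sense of the dump offline, a per-class summary is usually enough to spot the leaking type. Here's a minimal sketch (my addition, not part of the original answer) that totals size and instance count per class, largest first:

from collections import defaultdict

sizes = defaultdict(int)
counts = defaultdict(int)
for o in objs:
    sizes[o['class']] += o['size']
    counts[o['class']] += 1

for cls in sorted(sizes, key=sizes.get, reverse=True)[:20]:
    print('%s: %d objects, %d bytes' % (cls, counts[cls], sizes[cls]))

Keep in mind that sys.getsizeof() only reports an object's own footprint, not what it references, so the saved referents list is what lets you chase down the real owners of the memory.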
Added 2017-11-15
The Python 3.6 version is here:
import gc
import sys
import _pickle as cPickle

def memory_dump():
    with open("memory.pickle", 'wb') as dump:
        xs = []
        for obj in gc.get_objects():
            i = id(obj)
            size = sys.getsizeof(obj, 0)
            # referrers = [id(o) for o in gc.get_referrers(obj) if hasattr(o, '__class__')]
            referents = [id(o) for o in gc.get_referents(obj) if hasattr(o, '__class__')]
            if hasattr(obj, '__class__'):
                cls = str(obj.__class__)
                xs.append({'id': i, 'class': cls, 'size': size, 'referents': referents})
        cPickle.dump(xs, dump)
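If aborting the process at a memory threshold is too drastic for your production system, a sketch of an alternative (my addition, Unix-only, not from the original answer) is to hook memory_dump() up to a signal, so a snapshot can be requested from the outside while the process keeps running:

import os
import signal

# Write memory.pickle on demand: send SIGUSR1 to the process, e.g. `kill -USR1 <pid>`.
# Assumes the memory_dump() function defined above is in scope.
signal.signal(signal.SIGUSR1, lambda signum, frame: memory_dump())
print('Send SIGUSR1 to pid %d to dump memory' % os.getpid())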
Answered by: Agata427 | Posted: 01-03-2022
Answer 2
I will expand on Brett's answer from my recent experience. The Dozer package is well maintained, and despite advancements, like the addition of tracemalloc to the stdlib in Python 3.4, its gc.get_objects counting chart is my go-to tool to tackle memory leaks. Below I use dozer > 0.7, which had not been released at the time of writing (well, because I contributed a couple of fixes there recently).
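Since tracemalloc comes up, here is a minimal sketch of the stdlib route (illustrative only, not part of the Dozer workflow below): take a baseline snapshot, let the suspected leak run, and diff to see which lines allocated the growth.

import tracemalloc

tracemalloc.start(25)                  # keep up to 25 frames per allocation
baseline = tracemalloc.take_snapshot()

# ... exercise the suspected leak for a while ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.compare_to(baseline, 'lineno')[:10]:
    print(stat)                        # top 10 allocation sites by growth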
Example
Let's look at a non-trivial memory leak. I'll use Celery 4.4 here and will eventually uncover the feature that causes the leak (and because it's a bug/feature kind of thing, it can be called mere misconfiguration, caused by ignorance). So there's a Python 3.6 venv where I pip install "celery<4.5" and have the following module.
demo.py
import time

import celery

redis_dsn = 'redis://localhost'
app = celery.Celery('demo', broker=redis_dsn, backend=redis_dsn)

@app.task
def subtask():
    pass

@app.task
def task():
    for i in range(10_000):
        subtask.delay()
        time.sleep(0.01)

if __name__ == '__main__':
    task.delay().get()
Basically a task which schedules a bunch of subtasks. What can go wrong?
I'll use procpath to analyse Celery node memory consumption (pip install procpath). I have 4 terminals:
1. procpath record -d celery.sqlite -i1 "$..children[?('celery' in @.cmdline)]" to record the Celery node's process tree stats
2. docker run --rm -it -p 6379:6379 redis to run Redis, which will serve as Celery broker and result backend
3. celery -A demo worker --concurrency 2 to run the node with 2 workers
4. python demo.py to finally run the example
(4) will finish in under 2 minutes.
Then I use sqliteviz (pre-built version) to visualise what procpath has recorded. I drop celery.sqlite there and use this query:
SELECT datetime(ts, 'unixepoch', 'localtime') ts, stat_pid, stat_rss / 256.0 rss
FROM record
And in sqliteviz I create a line chart trace with X=ts, Y=rss, and add a split transform By=stat_pid. The resulting chart is:

This shape is likely pretty familiar to anyone who has fought with memory leaks.
Finding leaking objects
Now it's time for dozer. I'll show the non-instrumented case (you can instrument your code in a similar way if you can). To inject the Dozer server into the target process I'll use Pyrasite. There are two things to know about it:
- To run it, ptrace has to be configured as "classic ptrace permissions": echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope, which may be a security risk
- There is a non-zero chance that your target Python process will crash
With those caveats in mind, I:
- pip install https://github.com/mgedmin/dozer/archive/3ca74bd8.zip (that's the to-be 0.8 I mentioned above)
- pip install pillow (which dozer uses for charting)
- pip install pyrasite
After that I can get a Python shell in the target process:
pyrasite-shell 26572
And inject the following, which will run Dozer's WSGI application using the stdlib's wsgiref server.
import threading
import wsgiref.simple_server

import dozer

def run_dozer():
    app = dozer.Dozer(app=None, path='/')
    with wsgiref.simple_server.make_server('', 8000, app) as httpd:
        print('Serving Dozer on port 8000...')
        httpd.serve_forever()

threading.Thread(target=run_dozer, daemon=True).start()
Opening http://localhost:8000 in a browser, you should see something like:
After that I run python demo.py from (4) again and wait for it to finish. Then in Dozer I set "Floor" to 5000, and here's what I see:
Two types related to Celery grow as the subtasks are scheduled:
- celery.result.AsyncResult
- vine.promises.promise

weakref.WeakMethod has the same shape and numbers and must be caused by the same thing.
Finding root cause
At this point, from the leaking types and the trends, it may already be clear what's going on in your case. If it's not, Dozer has a "TRACE" link per type, which allows inspecting a chosen object's attributes and following its referrers (gc.get_referrers) and referents (gc.get_referents), continuing the process by traversing the graph.
But a picture is worth a thousand words, right? So I'll show how to use objgraph to render a chosen object's dependency graph:
- pip install objgraph
- apt-get install graphviz
Then:
- I run python demo.py from (4) again
- in Dozer I set floor=0, filter=AsyncResult
- and click "TRACE", which should yield
Then in the Pyrasite shell run:

import objgraph
objgraph.show_backrefs([objgraph.at(140254427663376)], filename='backref.png')
The PNG file should contain:
Basically there's some Context object containing a list called _children, which in turn contains many instances of celery.result.AsyncResult, which leak. Changing Filter=celery.*context in Dozer, here's what I see:
So the culprit is celery.app.task.Context. Searching for that type would certainly lead you to the Celery task page. Quickly searching for "children" there, here's what it says:

trail = True
If enabled the request will keep track of subtasks started by this task, and this information will be sent with the result (result.children).
Disabling the trail by setting trail=False like:

@app.task(trail=False)
def task():
    for i in range(10_000):
        subtask.delay()
        time.sleep(0.01)
Then restarting the Celery node from (3) and running python demo.py from (4) yet again shows this memory consumption:
Problem solved!
Answered by: Luke282 | Posted: 01-03-2022
Answer 3
Could you record the traffic (via a log) on your production site, then re-play it on your development server instrumented with a Python memory debugger? (I recommend Dozer: http://pypi.python.org/pypi/Dozer)
Answered by: Max987 | Posted: 01-03-2022
Answer 4
Make your program dump core, then clone an instance of the program on a sufficiently similar box using gdb. There are special macros to help with debugging Python programs within gdb, but if you can get your program to concurrently serve up a remote shell, you could just continue the program's execution and query it with Python.
I have never had to do this, so I'm not 100% sure it'll work, but perhaps the pointers will be helpful.
Answered by: Julia839 | Posted: 01-03-2022
Answer 5
I don't know how to dump an entire Python interpreter state and restore it. It would be useful; I'll keep my eye on this answer in case anyone else has ideas.

If you have an idea where the memory is leaking, you can add checks on the refcounts of your objects. For example:
import sys

x = SomeObject()
# ... later ...
oldRefCount = sys.getrefcount(x)
suspiciousFunction(x)
if oldRefCount != sys.getrefcount(x):
    print("Possible memory leak...")
You could also check for reference counts higher than some number that is reasonable for your app. To take it further, you could modify the Python interpreter to do this kind of check by replacing the Py_INCREF and Py_DECREF macros with your own. This might be a bit dangerous in a production app, though.
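As a sketch of the threshold idea (my addition; the threshold of 1000 is an arbitrary assumption to tune for your app):

import gc
import sys

def report_high_refcounts(threshold=1000):
    # Flag objects whose reference count looks suspiciously high.
    for obj in gc.get_objects():
        refcount = sys.getrefcount(obj)   # includes the temporary reference made by this call
        if refcount > threshold:
            print('%d references to %r' % (refcount, type(obj)))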
Here is an essay with more info on debugging these sorts of things. It's more geared for plugin authors but most of it applies.
Answered by: Joyce240 | Posted: 01-03-2022
Answer 6
The gc module has some functions that might be useful, like listing all objects the garbage collector found to be unreachable but cannot free, or a list of all objects being tracked.

If you have a suspicion about which objects might leak, the weakref module could be handy to find out if/when objects are collected.
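For example, a small sketch of both ideas (gc's debug flags plus a weakref callback), assuming CPython:

import gc
import weakref

# Keep everything the collector would otherwise free in gc.garbage so it can be inspected.
gc.set_debug(gc.DEBUG_SAVEALL)
gc.collect()
print('%d objects found unreachable' % len(gc.garbage))

# Use a weakref callback to learn when (or whether) a suspect object is collected.
class Suspect(object):
    pass

def _collected(ref):
    print('Suspect instance was collected')

s = Suspect()
r = weakref.ref(s, _collected)
del s
gc.collect()
print('still alive: %s' % (r() is not None))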
Answered by: Maddie336 | Posted: 01-03-2022
Answer 7
Meliae looks promising:

This project is similar to heapy (in the 'guppy' project), in its attempt to understand how memory has been allocated.
Currently, its main difference is that it splits the task of computing summary statistics, etc of memory consumption from the actual scanning of memory consumption. It does this, because I often want to figure out what is going on in my process, while my process is consuming huge amounts of memory (1GB, etc). It also allows dramatically simplifying the scanner, as I don't allocate python objects while trying to analyze python object memory consumption.

Answered by: Freddie644 | Posted: 01-03-2022
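For reference, a sketch of the dump/analyse split Meliae describes (the calls follow its documented scanner/loader interface; treat the exact API as an assumption and check the project's docs):

from meliae import scanner
scanner.dump_all_objects('meliae.json')   # run inside the memory-hungry process

# Later, offline:
from meliae import loader
om = loader.load('meliae.json')
print(om.summarize())                     # per-type object counts and total sizes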