
Commit 58439f4

Merge branch 'master' into master
2 parents f315e19 + 7f11d42 commit 58439f4

17 files changed: 671 additions & 186 deletions

.gitignore

Lines changed: 3 additions & 0 deletions

@@ -1,13 +1,16 @@
 .idea
+.vscode
 dist
 build
 MANIFEST
 *.egg-info
 *.pyc
 *~
+.coverage
 
 # Ignore mprof generated files
 mprofile_*.dat
 
 # virtual environment
 venv/
+.python-version

.travis.yml

Lines changed: 3 additions & 3 deletions

@@ -1,10 +1,10 @@
 language: python
 python:
-- "2.7"
-- "3.4"
 - "3.5"
 - "3.6"
-- "3.7-dev"
+- "3.7"
+- "3.8"
+- "3.9"
 - "pypy3"
 
 matrix:

MANIFEST.in

Lines changed: 0 additions & 3 deletions
This file was deleted.

Makefile

Lines changed: 3 additions & 1 deletion

@@ -1,6 +1,6 @@
 PYTHON ?= python
 
-.PHONY: test
+.PHONY: test develop
 
 test:
 	$(PYTHON) -m memory_profiler test/test_func.py
@@ -18,6 +18,8 @@ test:
 	$(PYTHON) test/test_exception.py
 	$(PYTHON) test/test_exit_code.py
 	$(PYTHON) test/test_mprof.py
+	$(PYTHON) test/test_async.py
+	mprof run test/test_func.py
 
 develop:
 	pip install -e .

README.rst

Lines changed: 52 additions & 14 deletions

@@ -5,6 +5,9 @@
 Memory Profiler
 =================
 
+
+**Note:** This package is no longer actively maintained. If you'd like to volunteer to maintain it, please drop me a line at f@bianp.net
+
 This is a python module for monitoring memory consumption of a process
 as well as line-by-line analysis of memory consumption for python
 programs. It is a pure python module which depends on the `psutil
@@ -23,7 +26,7 @@ The package is also available on `conda-forge
 
 To install from source, download the package, extract and type::
 
-    $ python setup.py install
+    $ pip install .
 
 
 =======
@@ -64,14 +67,14 @@ this would result in::
 
 Output will follow::
 
-    Line #    Mem usage  Increment   Line Contents
-    ==============================================
-         3                           @profile
-         4      5.97 MB    0.00 MB   def my_func():
-         5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
-         6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
-         7     13.61 MB -152.59 MB       del b
-         8     13.61 MB    0.00 MB       return a
+    Line #    Mem usage    Increment  Occurrences   Line Contents
+    ============================================================
+         3   38.816 MiB   38.816 MiB           1   @profile
+         4                                         def my_func():
+         5   46.492 MiB    7.676 MiB           1       a = [1] * (10 ** 6)
+         6  199.117 MiB  152.625 MiB           1       b = [2] * (2 * 10 ** 7)
+         7   46.629 MiB -152.488 MiB           1       del b
+         8   46.629 MiB    0.000 MiB           1       return a
 
 
 The first column represents the line number of the code that has been
@@ -179,7 +182,7 @@ track the usage of child processes: sum the memory of all children to the
 parent's usage and track each child individual.
 
 To create a report that combines memory usage of all the children and the
-parent, use the ``include_children`` flag in either the ``profile`` decorator or
+parent, use the ``include-children`` flag in either the ``profile`` decorator or
 as a command line argument to ``mprof``::
 
     mprof run --include-children <script>
@@ -197,7 +200,7 @@ This will create a plot using matplotlib similar to this:
 :target: https://github.com/pythonprofilers/memory_profiler/pull/134
 :height: 350px
 
-You can combine both the ``include_children`` and ``multiprocess`` flags to show
+You can combine both the ``include-children`` and ``multiprocess`` flags to show
 the total memory of the program as well as each child individually. If using
 the API directly, note that the return from ``memory_usage`` will include the
 child memory in a nested list along with the main process memory.
@@ -214,6 +217,21 @@ You can also hide the function timestamps using the ``n`` flag, such as
 
     mprof plot -n
 
+Trend lines and their numeric slope can be plotted using the ``s`` flag, such as
+
+    mprof plot -s
+
+.. image:: ./images/trend_slope.png
+   :height: 350px
+
+The intended usage of the ``-s`` switch is to check the labels' numerical slope over a significant time period:
+
+- ``>0`` might mean a memory leak.
+- ``~0`` if 0 or near 0, the memory usage may be considered stable.
+- ``<0`` is to be interpreted depending on the expected process memory usage patterns; it might also mean that the sampling period is too small.
+
+The trend lines are for illustrative purposes and are plotted as (very) small dashed lines.
+
 
 Setting debugger breakpoints
 =============================
@@ -392,6 +410,25 @@ file ~/.ipython/ipy_user_conf.py to add the following lines::
     import memory_profiler
     memory_profiler.load_ipython_extension(ip)
 
+===============================
+Memory tracking backends
+===============================
+`memory_profiler` supports different memory tracking backends including: 'psutil', 'psutil_pss', 'psutil_uss', 'posix', 'tracemalloc'.
+If no specific backend is specified, the default is "psutil", which measures RSS aka "Resident Set Size".
+In some cases (particularly when tracking child processes) RSS may overestimate memory usage (see `example/example_psutil_memory_full_info.py` for an example).
+For more information on "psutil_pss" (measuring PSS) and "psutil_uss" please refer to:
+https://psutil.readthedocs.io/en/latest/index.html?highlight=memory_info#psutil.Process.memory_full_info
+
+Currently, the backend can be set via the CLI
+
+    $ python -m memory_profiler --backend psutil my_script.py
+
+and is exposed by the API
+
+    >>> from memory_profiler import memory_usage
+    >>> mem_usage = memory_usage(-1, interval=.2, timeout=1, backend="psutil")
+
+
 ============================
 Frequently Asked Questions
 ============================
@@ -409,7 +446,6 @@ file ~/.ipython/ipy_user_conf.py to add the following lines::
 `psutil <http://pypi.python.org/pypi/psutil>`_ module.
 
 
-
 ===========================
 Support, bugs & wish list
 ===========================
@@ -419,9 +455,9 @@ Send issues, proposals, etc. to `github's issue tracker
 <https://github.com/pythonprofilers/memory_profiler/issues>`_ .
 
 If you've got questions regarding development, you can email me
-directly at fabian@fseoane.net
+directly at f@bianp.net
 
-.. image:: http://fseoane.net/static/tux_memory_small.png
+.. image:: http://fa.bianp.net/static/tux_memory_small.png
 
 
 =============
@@ -471,6 +507,8 @@ cleanup.
 
 `Juan Luis Cano <https://github.com/Juanlu001>`_ modernized the infrastructure and helped with various things.
 
+`Martin Becker <https://github.com/mgbckr>`_ added PSS and USS tracking via the psutil backend.
+
 =========
 License
 =========
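
The backends hunk above lists ``tracemalloc`` among the supported trackers. As a stdlib-only sketch of what that backend measures (Python-heap allocations, as opposed to the OS-level RSS reported by the default "psutil" backend), the underlying ``tracemalloc`` module can be used directly; this snippet is illustrative and does not require memory_profiler to be installed:

```python
# Stdlib-only illustration of the 'tracemalloc' backend mentioned above:
# it counts Python-level allocations, not resident set size, so its numbers
# differ from the default "psutil" (RSS) backend.
import tracemalloc

tracemalloc.start()
a = [1] * (10 ** 6)              # ~8 MB of list slots on 64-bit CPython
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(current > 7_000_000, peak >= current)  # True True
```

Because it only sees allocations made through Python's allocator, this backend misses memory held by C extensions, which is one reason the psutil-based backends remain the default.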

examples/async_decorator.py

Lines changed: 18 additions & 0 deletions

@@ -0,0 +1,18 @@
+import asyncio
+
+from memory_profiler import profile
+
+
+@profile
+@asyncio.coroutine
+def foo():
+    a = [1] * (10 ** 6)
+    b = [2] * (2 * 10 ** 7)
+    yield from asyncio.sleep(1)
+    del b
+    return a
+
+
+if __name__ == "__main__":
+    loop = asyncio.get_event_loop()
+    loop.run_until_complete(foo())
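
Note that the new example uses the generator-based ``@asyncio.coroutine`` style, which was deprecated in Python 3.8 and removed in 3.11. A native-coroutine equivalent would look like the sketch below (memory_profiler's ``@profile`` decorator is omitted here so the snippet stays stdlib-only):

```python
# Native-coroutine version of the example above; on modern Python,
# asyncio.run() replaces the manual event-loop boilerplate.
import asyncio

async def foo():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    await asyncio.sleep(0.1)
    del b
    return a

result = asyncio.run(foo())
print(len(result))  # 1000000
```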
Lines changed: 139 additions & 0 deletions

@@ -0,0 +1,139 @@
+from memory_profiler import memory_usage
+
+# size = 50000
+size = 3000
+
+
+def test_simple():
+
+    import numpy as np
+
+    def func():
+        a = np.random.random((size, size))
+        return a
+
+    rss = memory_usage(proc=func, max_usage=True, backend="psutil")
+    uss = memory_usage(proc=func, max_usage=True, backend="psutil_uss")
+    pss = memory_usage(proc=func, max_usage=True, backend="psutil_pss")
+    print(rss, uss, pss)
+
+
+def test_multiprocessing():
+
+    import numpy as np
+    import joblib
+    import time
+
+    def func():
+        n_jobs = 4
+        a = np.random.random((size, size))
+
+        def subprocess(i):
+            time.sleep(2)
+            return a[i,i]
+
+        results = joblib.Parallel(n_jobs=n_jobs)(
+            joblib.delayed(subprocess)(i)
+            for i in range(n_jobs))
+
+        return results
+
+    rss = memory_usage(proc=func, max_usage=True, backend="psutil", include_children=True, multiprocess=True)
+    uss = memory_usage(proc=func, max_usage=True, backend="psutil_uss", include_children=True, multiprocess=True)
+    pss = memory_usage(proc=func, max_usage=True, backend="psutil_pss", include_children=True, multiprocess=True)
+    print(rss, uss, pss)
+
+
+def test_multiprocessing_write():
+
+    import numpy as np
+    import joblib
+    import time
+
+    def func():
+        n_jobs = 4
+        a = np.random.random((size, size))
+
+        def subprocess(i):
+            aa = a.copy()
+            time.sleep(2)
+            return aa[i,i]
+
+        results = joblib.Parallel(n_jobs=n_jobs)(
+            joblib.delayed(subprocess)(i)
+            for i in range(n_jobs))
+
+        return results
+
+    rss = memory_usage(proc=func, max_usage=True, backend="psutil", include_children=True, multiprocess=True)
+    uss = memory_usage(proc=func, max_usage=True, backend="psutil_uss", include_children=True, multiprocess=True)
+    pss = memory_usage(proc=func, max_usage=True, backend="psutil_pss", include_children=True, multiprocess=True)
+    print(rss, uss, pss)
+
+
+def test_multiprocessing_showcase():
+
+    import numpy as np
+    import joblib
+    import time
+    import datetime
+
+    def func():
+
+        # n_jobs = 32
+        # size = 25000
+        # Creating data: 25000x25000 ... done (4.66 Gb). Starting processing: n_jobs=32 ... done (0:00:37.581291). RSS: 353024.01
+        # Creating data: 25000x25000 ... done (4.66 Gb). Starting processing: n_jobs=32 ... done (0:00:38.867385). USS: 148608.62
+        # Creating data: 25000x25000 ... done (4.66 Gb). Starting processing: n_jobs=32 ... done (0:00:29.049754). PSS: 169253.91
+
+        # n_jobs = 64
+        # size = 10000
+        # Creating data: 10000x10000 ... done (0.75 Gb). Starting processing: n_jobs=64 ... done (0:00:14.701243). RSS: 111362.79
+        # Creating data: 10000x10000 ... done (0.75 Gb). Starting processing: n_jobs=64 ... done (0:00:15.020202). USS: 56108.69
+        # Creating data: 10000x10000 ... done (0.75 Gb). Starting processing: n_jobs=64 ... done (0:00:15.072918). PSS: 54826.61
+
+        # Conclusion:
+        # * RSS is overestimating like crazy (I checked the actual memory usage using htop)
+
+        n_jobs = 8
+        size = 3000
+
+        print("Creating data: {size}x{size} ... ".format(size=size), end="")
+        a = np.random.random((size, size))
+        print("done ({size:.02f} Gb). ".format(size=a.size * a.itemsize / 1024**3), end="")
+
+        def subprocess(i):
+            aa = a.copy()
+            r = aa[1,1]
+            aa = a.copy()
+            time.sleep(10)
+            return r
+
+            # r = a[1,1]
+            # # time.sleep(10)
+            # return r
+
+            pass
+
+        start = datetime.datetime.now()
+        print("Starting processing: n_jobs={n_jobs} ... ".format(n_jobs=n_jobs), end="")
+        results = joblib.Parallel(n_jobs=n_jobs)(
+            joblib.delayed(subprocess)(i)
+            for i in range(n_jobs))
+        print("done ({}). ".format(datetime.datetime.now() - start), end="")
+
+        return results
+
+    rss = memory_usage(proc=func, max_usage=True, backend="psutil", include_children=True, multiprocess=True)
+    print("RSS: {rss:.02f}".format(rss=rss))
+    uss = memory_usage(proc=func, max_usage=True, backend="psutil_uss", include_children=True, multiprocess=True)
+    print("USS: {uss:.02f}".format(uss=uss))
+    pss = memory_usage(proc=func, max_usage=True, backend="psutil_pss", include_children=True, multiprocess=True)
+    print("PSS: {pss:.02f}".format(pss=pss))
+
+
+if __name__ == "__main__":
+    test_simple()
+    test_multiprocessing()
+    test_multiprocessing_write()
+    test_multiprocessing_showcase()
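
The showcase above reads RSS/USS/PSS through psutil. As a dependency-free point of comparison, the current process's peak RSS (roughly what the 'posix' backend relies on) can be read with the stdlib ``resource`` module on Unix-like systems; note that ``ru_maxrss`` is reported in kilobytes on Linux but in bytes on macOS, and the module is unavailable on Windows:

```python
# Stdlib-only peak-RSS readout for comparison with the psutil-based numbers
# above (POSIX only; units differ between Linux and macOS).
import resource
import sys

data = [0] * (10 ** 6)  # allocate a few MB so the peak is non-trivial
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
unit = "bytes" if sys.platform == "darwin" else "kilobytes"
print(peak > 0, unit)
```

Unlike USS/PSS, this figure makes no attempt to discount pages shared with other processes, which is exactly the overestimation the showcase demonstrates.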

images/trend_slope.png

Binary file added (55.6 KB)

0 commit comments
