Create a list with initial capacity in Python
Code like this often comes up:
l = []
while foo:
    # baz
    l.append(bar)
    # qux
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be refactored into a list comprehension. If the for/while loop is very complicated, though, this is infeasible. Is there an equivalent for us Python programmers?
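For reference, the list-comprehension refactoring I mean looks something like this (compute() is a hypothetical stand-in for whatever the loop body produces):

l = [compute(i) for i in range(n)]  # compute() is hypothetical; builds the list in one pass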
Asked by: Aida245 | Posted: 28-01-2022
Answer 1
Warning: This answer is contested. See comments.
def doAppend(size=10000):
    result = []
    for i in range(size):
        message = "some unique object %d" % (i,)
        result.append(message)
    return result

def doAllocate(size=10000):
    result = size * [None]
    for i in range(size):
        message = "some unique object %d" % (i,)
        result[i] = message
    return result
Results (each function evaluated 144 times, durations averaged):
simple append 0.0102
pre-allocate 0.0098
Conclusion: it barely matters.
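If you want to reproduce this on your own machine, here is a minimal sketch using the standard timeit module, reusing the two functions above:

import timeit

# Average each benchmark over 144 runs, as in the results above.
for name in ('doAppend', 'doAllocate'):
    t = timeit.timeit('%s()' % name, setup='from __main__ import %s' % name, number=144)
    print('%s: %.4f s per call' % (name, t / 144))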
Premature optimization is the root of all evil.
Answered by: Kelvin292 | Posted: 01-03-2022
Answer 2
Python lists have no built-in pre-allocation. If you really need to make a list, and need to avoid the overhead of appending (and you should verify that you do), you can do this:
l = [None] * 1000  # make a list of 1000 Nones
for i in range(1000):  # xrange in Python 2
    # baz
    l[i] = bar
    # qux
Perhaps you could avoid the list by using a generator instead:
def my_things():
    while foo:
        # baz
        yield bar
        # qux

for thing in my_things():
    # do something with thing
This way, the list is never stored in memory all at once; its elements are merely generated as needed.
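A nice side effect is that a consumer can stop early and the remaining elements are never produced at all; for example, with itertools.islice (a sketch reusing the schematic my_things() above):

from itertools import islice

# Take only the first 10 things; the generator never runs past them.
for thing in islice(my_things(), 10):
    ...  # do something with thing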
Answered by: Kelsey472 | Posted: 01-03-2022
Answer 3
Short version: use
pre_allocated_list = [None] * size
to preallocate a list (that is, to be able to address 'size' elements of the list instead of gradually forming the list by appending). This operation is very fast, even on big lists. Allocating new objects that will be later assigned to list elements will take much longer and will be the bottleneck in your program, performance-wise.
Long version:
I think that initialization time should be taken into account.
Since everything in Python is a reference, it doesn't matter whether you set each element to None or to some string: either way it's only a reference. It will take longer, though, if you want to create a new object for each element to reference.
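You can see the "only a reference" point directly: a list of a million Nones and a list of a million strings report the same size, because the list stores only the references (a quick check, not from the original answer; exact numbers vary by build):

import sys

# Both lists store one million references; the referenced objects are not counted.
print(sys.getsizeof([None] * 1000000))  # e.g. 8000056 bytes on a 64-bit build
print(sys.getsizeof(["x"] * 1000000))   # the same size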
For Python 3.2:
import time
import copy

def print_timing(func):
    def wrapper(*arg):
        t1 = time.time()
        res = func(*arg)
        t2 = time.time()
        print("{} took {} ms".format(func.__name__, (t2 - t1) * 1000.0))
        return res
    return wrapper

@print_timing
def prealloc_array(size, init=None, cp=True, cpmethod=copy.deepcopy, cpargs=(), use_num=False):
    result = [None] * size
    if init is not None:
        if cp:
            for i in range(size):
                result[i] = init
        else:
            if use_num:
                for i in range(size):
                    result[i] = cpmethod(i)
            else:
                for i in range(size):
                    result[i] = cpmethod(cpargs)
    return result

@print_timing
def prealloc_array_by_appending(size):
    result = []
    for i in range(size):
        result.append(None)
    return result

@print_timing
def prealloc_array_by_extending(size):
    result = []
    none_list = [None]
    for i in range(size):
        result.extend(none_list)
    return result

def main():
    n = 1000000
    x = prealloc_array_by_appending(n)
    y = prealloc_array_by_extending(n)
    a = prealloc_array(n, None)
    b = prealloc_array(n, "content", True)
    c = prealloc_array(n, "content", False, "some object {}".format, ("blah"), False)
    d = prealloc_array(n, "content", False, "some object {}".format, None, True)
    e = prealloc_array(n, "content", False, copy.deepcopy, "a", False)
    f = prealloc_array(n, "content", False, copy.deepcopy, (), False)
    g = prealloc_array(n, "content", False, copy.deepcopy, [], False)
    print("x[5] = {}".format(x[5]))
    print("y[5] = {}".format(y[5]))
    print("a[5] = {}".format(a[5]))
    print("b[5] = {}".format(b[5]))
    print("c[5] = {}".format(c[5]))
    print("d[5] = {}".format(d[5]))
    print("e[5] = {}".format(e[5]))
    print("f[5] = {}".format(f[5]))
    print("g[5] = {}".format(g[5]))

if __name__ == '__main__':
    main()
Evaluation:
prealloc_array_by_appending took 118.00003051757812 ms
prealloc_array_by_extending took 102.99992561340332 ms
prealloc_array took 3.000020980834961 ms
prealloc_array took 49.00002479553223 ms
prealloc_array took 316.9999122619629 ms
prealloc_array took 473.00004959106445 ms
prealloc_array took 1677.9999732971191 ms
prealloc_array took 2729.999780654907 ms
prealloc_array took 3001.999855041504 ms
x[5] = None
y[5] = None
a[5] = None
b[5] = content
c[5] = some object blah
d[5] = some object 5
e[5] = a
f[5] = ()
g[5] = []
As you can see, just making a big list of references to the same None object takes very little time.
Appending or extending takes longer (I didn't average anything, but after running this a few times I can tell you that extending and appending take roughly the same time).
Allocating a new object for each element is what takes the most time. And S.Lott's answer does that: it formats a new string every time. That is not strictly required; if you want to preallocate some space, just make a list of None, then assign data to the list elements at will. Either way, generating the data takes more time than appending to or extending the list, whether you generate it while creating the list or afterwards. But if you want a sparsely populated list, starting with a list of None is definitely faster.
Answered by: Maria642 | Posted: 01-03-2022
Answer 4
The Pythonic way for this is:
x = [None] * numElements
Or whatever default value you wish to prepopulate with, e.g.
bottles = [Beer()] * 99
sea = [Fish()] * many
vegetarianPizzas = [None] * peopleOrderingPizzaNotQuiche
(Caveat emptor: the [Beer()] * 99 syntax creates one Beer and then populates the list with 99 references to that same single instance.)
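That caveat bites as soon as the elements are mutable: mutating "one" element appears to mutate them all. A quick illustration (not from the original answer), with the list comprehension that gives each slot a fresh instance:

rows = [[0] * 3] * 3                 # three references to the SAME inner list
rows[0][0] = 99
print(rows)                          # [[99, 0, 0], [99, 0, 0], [99, 0, 0]]

rows = [[0] * 3 for _ in range(3)]   # a fresh inner list per slot
rows[0][0] = 99
print(rows)                          # [[99, 0, 0], [0, 0, 0], [0, 0, 0]]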
Python's default approach can be pretty efficient, although that efficiency decays as you increase the number of elements.
Compare
import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # milliseconds
        print('%fms' % msecs)

Elements = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))

def doAppend():
    result = []
    i = 0
    while i < Elements:
        result.append(i)
        i += 1

def doAllocate():
    result = [None] * Elements
    i = 0
    while i < Elements:
        result[i] = i
        i += 1

def doGenerator():
    return list(i for i in range(Elements))

def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        x = 0
        while x < Iterations:
            fn()
            x += 1

test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
with
#include <vector>
#include <iostream>
#include <chrono>

typedef std::vector<unsigned int> Vec;

static const unsigned int Elements = 100000;
static const unsigned int Iterations = 144;

void doAppend()
{
    Vec v;
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doReserve()
{
    Vec v;
    v.reserve(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doAllocate()
{
    Vec v;
    v.resize(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v[i] = i;
    }
}

using namespace std;

void test(const char* name, void(*fn)(void))
{
    cout << name << ": ";
    auto start = chrono::high_resolution_clock::now();
    for (unsigned int i = 0; i < Iterations; ++i) {
        fn();
    }
    auto end = chrono::high_resolution_clock::now();
    auto elapsed = end - start;
    cout << chrono::duration<double, milli>(elapsed).count() << "ms\n";
}

int main()
{
    cout << "Elements: " << Elements << ", Iterations: " << Iterations << '\n';
    test("doAppend", doAppend);
    test("doReserve", doReserve);
    test("doAllocate", doAllocate);
}
On my Windows 7 Core i7, 64-bit Python gives
Elements: 100000, Iterations: 144
doAppend: 3587.204933ms
doAllocate: 2701.154947ms
doGenerator: 1721.098185ms
While C++ gives (built with Microsoft Visual C++, 64-bit, optimizations enabled)
Elements: 100000, Iterations: 144
doAppend: 74.0042ms
doReserve: 27.0015ms
doAllocate: 5.0003ms
C++ debug build produces:
Elements: 100000, Iterations: 144
doAppend: 2166.12ms
doReserve: 2082.12ms
doAllocate: 273.016ms
The point here is that with Python you can achieve a 7-8% performance improvement, and if you think you're writing a high-performance application (or something that is used in a web service), then that isn't to be sniffed at; but you may need to rethink your choice of language.
Also, the Python code above isn't really idiomatic Python. Switching to truly Pythonic code gives better performance:
import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # milliseconds
        print('%fms' % msecs)

Elements = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))

def doAppend():
    for x in range(Iterations):
        result = []
        for i in range(Elements):
            result.append(i)

def doAllocate():
    for x in range(Iterations):
        result = [None] * Elements
        for i in range(Elements):
            result[i] = i

def doGenerator():
    for x in range(Iterations):
        result = list(i for i in range(Elements))

def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        fn()

test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
Which gives
Elements: 100000, Iterations: 144
doAppend: 2153.122902ms
doAllocate: 1346.076965ms
doGenerator: 1614.092112ms
(On 32-bit builds, doGenerator does better than doAllocate.)
Here the gap between doAppend and doAllocate is significantly larger.
Obviously, the differences here really only apply if you are doing this more than a handful of times, or if you are doing it on a heavily loaded system where those numbers will get scaled out by orders of magnitude, or if you are dealing with considerably larger lists.
The point here: Do it the Pythonic way for the best performance.
But if you are worried about general, high-level performance, Python is the wrong language. The most fundamental problem is that Python function calls have traditionally been up to 300x slower than in other languages, due to Python features like decorators, etc. (PythonSpeed/PerformanceTips, Data Aggregation).
Answered by: Michael997 | Posted: 01-03-2022
Answer 5
As others have mentioned, the simplest way to preseed a list is with None.
That being said, you should understand the way Python lists actually work before deciding this is necessary.
In the CPython implementation, the underlying array of a list is always created with extra headroom, in progressively larger sizes (4, 8, 16, 25, 35, 46, 58, 72, 88, 106, 126, 148, 173, 201, 233, 269, 309, 354, 405, 462, 526, 598, 679, 771, 874, 990, 1120, ...), so that resizing the list does not happen nearly so often.
Because of this behavior, most list.append() calls have O(1) complexity, with increased cost only when the length crosses one of these boundaries, at which point the cost is O(n). This behavior is what leads to the minimal increase in execution time in S.Lott's answer.
Source: Python list implementation
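You can observe the over-allocation indirectly with sys.getsizeof, which reports the bytes the list object has allocated rather than the bytes it logically needs (a sketch; exact sizes vary by CPython version and platform):

import sys

l = []
last = sys.getsizeof(l)
for _ in range(64):
    l.append(None)
    size = sys.getsizeof(l)
    if size != last:
        # The size only jumps when an append crosses a capacity boundary.
        print('len %2d -> %d bytes' % (len(l), size))
        last = size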
Answered by: Maria913 | Posted: 01-03-2022
Answer 6
Concerns about preallocation in Python arise if you're working with NumPy, which has more C-like arrays. In this instance, preallocation concerns are about the shape of the data and the default value.
Consider NumPy if you're doing numerical computation on massive lists and want performance.
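As a minimal sketch of what that looks like, NumPy preallocation fixes the shape, the dtype, and optionally a fill value up front:

import numpy as np

a = np.zeros((1000, 3))             # shape fixed up front, filled with 0.0
b = np.full((1000, 3), -1.0)        # same shape, custom default value
c = np.empty(1000, dtype=np.int32)  # uninitialized; you must fill every slot yourself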
Answered by: Kirsten729 | Posted: 01-03-2022
Answer 7
I ran S.Lott's code and produced the same 10% performance increase by preallocating. I also tried Ned Batchelder's idea of using a generator, and saw the generator perform better than doAllocate. For my project the 10% improvement matters, so thanks to everyone, as this helps a bunch.
def doAppend(size=10000):
    result = []
    for i in range(size):
        message = "some unique object %d" % (i,)
        result.append(message)
    return result

def doAllocate(size=10000):
    result = size * [None]
    for i in range(size):
        message = "some unique object %d" % (i,)
        result[i] = message
    return result

def doGen(size=10000):
    return list("some unique object %d" % (i,) for i in range(size))

size = 1000

@print_timing  # the print_timing decorator defined in Answer 3
def testAppend():
    for i in range(size):
        doAppend()

@print_timing
def testAlloc():
    for i in range(size):
        doAllocate()

@print_timing
def testGen():
    for i in range(size):
        doGen()

testAppend()
testAlloc()
testGen()
Output
testAppend took 14440.000ms
testAlloc took 13580.000ms
testGen took 13430.000ms
Answered by: Michelle737 | Posted: 01-03-2022
Answer 8
Python's list doesn't support preallocation. NumPy allows you to preallocate memory, but in practice it doesn't seem to be worth it if your goal is to speed up the program.
This test simply writes an integer into the list, but in a real application you'd likely do more complicated things per iteration, which further reduces the importance of the memory allocation.
import timeit
import numpy as np
def list_append(size=1_000_000):
    result = []
    for i in range(size):
        result.append(i)
    return result

def list_prealloc(size=1_000_000):
    result = [None] * size
    for i in range(size):
        result[i] = i
    return result

def numpy_prealloc(size=1_000_000):
    result = np.empty(size, np.int32)
    for i in range(size):
        result[i] = i
    return result
setup = 'from __main__ import list_append, list_prealloc, numpy_prealloc'
print(timeit.timeit('list_append()', setup=setup, number=10)) # 0.79
print(timeit.timeit('list_prealloc()', setup=setup, number=10)) # 0.62
print(timeit.timeit('numpy_prealloc()', setup=setup, number=10)) # 0.73
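Note that the per-element loop in numpy_prealloc is only there to keep the three tests comparable; in real NumPy code you would vectorize the fill, which skips the Python-level loop entirely:

result = np.arange(1_000_000, dtype=np.int32)  # vectorized equivalent of numpy_prealloc's loop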
Answered by: Miranda773 | Posted: 01-03-2022
Answer 9
For some applications, a dictionary may be what you are looking for. For example, in the find_totients function below, I found it more convenient to use a dictionary since I didn't have a zero index.
import math

def totient(n):
    totient = 0
    if n == 1:
        totient = 1
    else:
        for i in range(1, n):
            if math.gcd(i, n) == 1:
                totient += 1
    return totient

def find_totients(max):
    totients = dict()
    for i in range(1, max + 1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1, max + 1):
        print(i, totients[i])
This problem could also be solved with a preallocated list:
def find_totients(max):
    totients = [None] * (max + 1)
    for i in range(1, max + 1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1, max + 1):
        print(i, totients[i])
I feel that this is less elegant and more prone to bugs, because I'm storing None values, which could throw an exception if I accidentally use one the wrong way, and because I need to think about edge cases that the dictionary lets me avoid.
It's true the dictionary won't be as efficient, but as others have commented, small differences in speed are not always worth significant maintenance hazards.
Answered by: Blake428 | Posted: 01-03-2022
Answer 10
From what I understand, Python lists are already quite similar to ArrayLists. But if you want to tweak those parameters, I found this post on the Internet that may be interesting (basically, just create your own ScalableList extension):
http://mail.python.org/pipermail/python-list/2000-May/035082.html
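The idea can be sketched as a small list subclass that grows on assignment. ScalableList and its None padding here are my own illustration, not an established API:

class ScalableList(list):
    # A list that pads with None when you assign past the end.
    def __setitem__(self, index, value):
        if index >= len(self):
            self.extend([None] * (index + 1 - len(self)))
        super().__setitem__(index, value)

sl = ScalableList()
sl[5] = 'hello'
print(sl)  # [None, None, None, None, None, 'hello']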
Answered by: Clark564 | Posted: 01-03-2022