<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://www.lptms.universite-paris-saclay.fr//wiki/index.php?action=history&amp;feed=atom&amp;title=Reading_a_large_data_file_%28efficiently%29</id>
	<title>Reading a large data file (efficiently) - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://www.lptms.universite-paris-saclay.fr//wiki/index.php?action=history&amp;feed=atom&amp;title=Reading_a_large_data_file_%28efficiently%29"/>
	<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wiki/index.php?title=Reading_a_large_data_file_(efficiently)&amp;action=history"/>
	<updated>2026-05-11T17:12:07Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wiki/index.php?title=Reading_a_large_data_file_(efficiently)&amp;diff=407&amp;oldid=prev</id>
		<title>Landes: Explained how to use &quot;with open&quot; to read an entire data matrix efficiently; also a word on using the &quot;subprocess&quot; module</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wiki/index.php?title=Reading_a_large_data_file_(efficiently)&amp;diff=407&amp;oldid=prev"/>
		<updated>2014-02-16T18:23:52Z</updated>

		<summary type="html">&lt;p&gt;Explained how to use &amp;quot;with open&amp;quot; to read an entire data matrix efficiently; also a word on using the &amp;quot;subprocess&amp;quot; module&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Reading large data files can quickly become a problem.&lt;br /&gt;
&lt;br /&gt;
np.loadtxt(&amp;#039;filename&amp;#039;) easily converts a file into an array, but it is impractical, in particular if your data file is larger than your RAM.&lt;br /&gt;
&lt;br /&gt;
Another, much more memory-efficient way (with the appropriate buffering handled by Python) is to use &amp;quot;with open(...) as f&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
filename = &amp;#039;my_data_file.dat&amp;#039;  # path to your data file&lt;br /&gt;
N = 10000000&lt;br /&gt;
bigMatrix = np.zeros((N, 12))      # same shape as the expected data. Here, we have 12 columns.&lt;br /&gt;
                                   # With this N, &amp;quot;bigMatrix&amp;quot; is roughly 1 GB.&lt;br /&gt;
iteration = 0&lt;br /&gt;
with open(filename, &amp;#039;r&amp;#039;) as f:    # this is an efficient, buffered way of handling the file.&lt;br /&gt;
    for line in f:&lt;br /&gt;
        bigMatrix[iteration] = np.fromstring(line, sep=&amp;#039; &amp;#039;)  # if the column separator is a space &amp;quot; &amp;quot;. Adapt otherwise.&lt;br /&gt;
        iteration += 1&lt;br /&gt;
        if iteration &amp;gt;= N:  # stop so as not to exceed the matrix size, if the file is longer than N lines.&lt;br /&gt;
            break&lt;br /&gt;
bigMatrix = bigMatrix[:iteration, :]     # drop leftover zeros, if the file is shorter than N lines.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The only limitation is that you need to specify the shape (especially the number of columns) in advance; but if you analyze many files in a format you defined yourself, this is usually not a problem.&lt;br /&gt;
&lt;br /&gt;
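If the number of columns is not known in advance either, one can peek at the first line of the file; a minimal sketch (the file name and its contents are hypothetical, and whitespace-separated columns are assumed):&lt;br /&gt;
&lt;br /&gt;

```python
# Hypothetical example file, created here so the sketch is self-contained.
with open('example_data.dat', 'w') as f:
    f.write('1 2 3\n4 5 6\n7 8 9\n')

# Peek at the first line to learn the number of columns,
# instead of hard-coding it.
with open('example_data.dat', 'r') as f:
    first_line = f.readline()
n_columns = len(first_line.split())   # assumes whitespace-separated columns

print(n_columns)   # 3 for this example file
```

The resulting n_columns can then replace the hard-coded column count when allocating the matrix, e.g. np.zeros((N, n_columns)).&lt;br /&gt;
&lt;br /&gt;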
A possible way to circumvent the problem of choosing N in advance is to run something like &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
output_string = subprocess.check_output(&amp;#039;wc -l my_data_file_name.dat&amp;#039;, shell=True)  # with shell=True, pass a single command string&lt;br /&gt;
number_of_lines_in_file = int(output_string.split()[0])  # &amp;quot;wc -l&amp;quot; prints the line count first; the output is bytes in Python 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then use the resulting line count as N.&lt;/div&gt;</summary>
		<author><name>Landes</name></author>
	</entry>
</feed>