I have to take the Fourier transform of quite a large 2D array (2^15 by 2^15, but I'll use 2^10 as an example here). My array is mostly zeros, and since Fourier can take sparse arrays, I can just initialize my array as a SparseArray and pass it through Fourier. The problem is that once I Fourier transform the array, it is no longer sparse and becomes much more memory-intensive:
arrayIndices = DeleteDuplicates@RandomInteger[{1, 2^10}, {100, 2}];
sparse = SparseArray[# -> 1 & /@ arrayIndices, {2^10, 2^10}];
fft = Fourier[sparse];
ByteCount /@ {sparse, fft}
{21400, 16777376}
I feel that there should be a workaround because I don't actually need a Fourier transform at the same resolution as my original data. So, what I would like to do is feed in an input sparse array of size 2^10 by 2^10 and get out a coarser-grained Fourier transform, say of size 2^8 by 2^8.
I know that Fourier allows you to specify which entries of the discrete Fourier transform you want to extract, but this amounts to computing the full FFT and then throwing away some of the data. What I would like is to get a coarse-grained discrete Fourier transform in one shot, so that I avoid making the computation unnecessarily memory-intensive.
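One possible approach, assuming "coarse-grained" means the low-frequency block of the full DFT: since the array has only a handful of nonzero entries, you can evaluate the DFT sum at just the first 2^8 by 2^8 frequencies, summing over the nonzero elements only and never forming the dense result. This is only a minimal sketch, and coarseFourier is a made-up name, not a built-in function; under Mathematica's default FourierParameters it should match the top-left block of Fourier[sparse]:

(* Sketch: compute only the first m x n frequencies of the 2D DFT directly
   from the nonzero entries of the sparse array. coarseFourier is a made-up
   name, not a built-in. *)
coarseFourier[sa_SparseArray, {m_, n_}] :=
 Module[{rules = Most@ArrayRules[sa], pos, vals, n1, n2, rowPhase, colPhase},
  {n1, n2} = Dimensions[sa];
  pos = rules[[All, 1]];   (* positions of the nonzero entries *)
  vals = rules[[All, 2]];  (* their values *)
  (* phase-factor matrices: m x p and p x n, where p = number of nonzeros *)
  rowPhase = Exp[2. Pi I Outer[Times, Range[0, m - 1], pos[[All, 1]] - 1]/n1];
  colPhase = Exp[2. Pi I Outer[Times, pos[[All, 2]] - 1, Range[0, n - 1]]/n2];
  (* vals colPhase scales row i of colPhase by vals[[i]] *)
  rowPhase . (vals colPhase)/Sqrt[n1 n2]]

coarse = coarseFourier[sparse, {2^8, 2^8}];
ByteCount[coarse]

For the 2^10 example this agrees with Fourier[sparse][[;; 2^8, ;; 2^8]] up to rounding, and its cost scales with (number of nonzeros) x 2^8 x 2^8 rather than with the full grid. Note that it captures only the low nonnegative frequencies; the negative-frequency components sit in the last rows and columns of the full transform, so you may want to evaluate those blocks the same way.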
Comment: I observe that most rows and most columns are entirely equal to 0. Is that really representative of your use case?

Comment: ArrayResample? It is difficult to understand if your spatial sampling rate is too high or if there are other issues (like too many zeroes that Henrik mentioned) unless there is a bit more on the physics of the problem.
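If lower spatial resolution is acceptable, one reading of the ArrayResample suggestion is to resample the data down to 2^8 by 2^8 before transforming. This is only a sketch for the 2^10 example; it needs the dense array, so it may not help with memory at 2^15, and resampling low-pass filters the data, which is not the same as extracting entries of the full-resolution DFT:

(* Downsample first, then transform; Normal is used because this sketch
   assumes a dense input to ArrayResample. *)
small = ArrayResample[Normal[sparse], {2^8, 2^8}];
fftSmall = Fourier[small];
ByteCount[fftSmall]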