NumPy Memory & Performance

Last Updated: 09 Nov 2025


Memory efficiency and speed are critical in large-scale ML, robotics, and satellite-data pipelines. NumPy returns views (not copies) whenever it can, which saves RAM and keeps operations fast.

Tip: “A view is just a new door into the same data; a copy builds a whole new house.”


1. Views vs Copies

Operation        | Returns | Affects Original?
---------------- | ------- | -----------------
Slicing          | View    | Yes
reshape, ravel   | View*   | Yes
flatten, copy()  | Copy    | No

*reshape and ravel return a view when the memory layout allows it; otherwise they fall back to a copy.

import numpy as np

arr = np.array([1, 2, 3, 4])
view = arr[1:3]       # View
view[0] = 99
print(arr)            # [ 1 99  3  4] → changed!

cpy = arr[1:3].copy() # Copy (new memory)
cpy[0] = 88
print(arr)            # [ 1 99  3  4] → NOT changed
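
The same rule can be checked for reshape and flatten. A minimal sketch (variable names are illustrative):

# reshape gives a view when the layout allows it; flatten always copies
m = np.arange(6)
r = m.reshape(2, 3)
r[0, 0] = -1
print(m)              # [-1  1  2  3  4  5] → original changed

f = m.flatten()
f[1] = 100
print(m)              # [-1  1  2  3  4  5] → NOT changed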

2. np.shares_memory() — Check if View

a = np.arange(10)
b = a[2:6]
print(np.shares_memory(a, b))   # True → view
c = a[2:6].copy()
print(np.shares_memory(a, c))   # False → copy
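
The same function can verify the table from section 1. A small sketch (illustrative names):

m = np.arange(6)
print(np.shares_memory(m, m.reshape(2, 3)))   # True  → reshape returned a view
print(np.shares_memory(m, m.ravel()))         # True  → ravel returned a view
print(np.shares_memory(m, m.flatten()))       # False → flatten copied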

3. In-Place Operations (out=)

Reuse an existing buffer → no new output array is allocated.

x = np.random.rand(1000, 1000)
y = np.random.rand(1000, 1000)

# Normal → allocates a new array for the result
z = x + y

# In-place → writes the result into x's existing buffer
result = np.add(x, y, out=x)
print(result is x)              # True  → no new allocation
print(np.shares_memory(x, z))   # False → z is a separate array
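
Augmented assignment (+=, *=, ...) is also in-place for NumPy arrays and has the same memory-reuse effect as out=. A minimal sketch with fresh arrays (names are illustrative):

x = np.random.rand(1000, 1000)
y = np.random.rand(1000, 1000)
before = x
x += y                    # writes into x's existing buffer
print(x is before)        # True → same object, memory reused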

Speed Test

import time

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t = time.time()
c = a + b
print("Normal:", time.time() - t)

t = time.time()
np.add(a, b, out=a)
print("In-place:", time.time() - t)

The in-place version is usually noticeably faster here (often around 2–3×) because it skips allocating a new output array; the exact speedup depends on array size, memory pressure, and hardware.
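
For steadier numbers than a single time.time() call, timeit can average many runs. A sketch under the same setup (the exact ratio will differ on your machine):

import timeit

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

normal  = timeit.timeit(lambda: a + b, number=200)
inplace = timeit.timeit(lambda: np.add(a, b, out=a), number=200)
print(f"Normal:   {normal:.3f} s")
print(f"In-place: {inplace:.3f} s")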


4. Memory Layout: order='C' vs 'F'

  • 'C' → row-major (default)
  • 'F' → column-major (Fortran)

arr = np.array([[1, 2], [3, 4]], order='C')
print(arr.flags)
# C_CONTIGUOUS : True
# F_CONTIGUOUS : False

Use Case: BLAS-backed operations such as matrix multiplication are sensitive to memory layout; matching the order a routine expects can avoid extra internal copies (see the sketch below).
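
A rough way to see whether layout matters on your setup is to time the same matrix product on C-order and F-order copies of the same data. A sketch (results depend heavily on the BLAS backend NumPy was built against, and the gap may be small):

import time

n = 2000
c_mat = np.random.rand(n, n)          # row-major (default)
f_mat = np.asfortranarray(c_mat)      # same values, column-major layout

t = time.time()
_ = c_mat @ c_mat
print("C-order:", time.time() - t)

t = time.time()
_ = f_mat @ f_mat
print("F-order:", time.time() - t)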


5. np.ascontiguousarray() — Force C-order

transposed = arr.T
print(transposed.flags.c_contiguous)   # False

c_order = np.ascontiguousarray(transposed)
print(c_order.flags.c_contiguous)      # True
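
A useful property: the call only copies when it has to. A quick check (sketch), reusing the variables above plus an illustrative already_c array:

already_c = np.arange(6).reshape(2, 3)        # already C-contiguous
print(np.shares_memory(already_c, np.ascontiguousarray(already_c)))   # True → no copy made

print(np.shares_memory(transposed, c_order))  # False → a real copy was made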