Rotating a 2D image using change of coordinates and scipy interpolation

Issue

I’m trying to rotate my image, but it looks like my frame does not rotate at all.

Here are the steps of my code:

1 – Create an image of an inclined disk.

2 – Apply the change of coordinates to x and y to get the new coordinate system.

3 – Make a 2D interpolation at those coordinates.

Problem: The inclination part, which shrinks the x-axis by a cosine factor, works fine, but the rotation of the frame doesn’t work at all. It looks like my frame is shrinking even more.

import numpy as np
import matplotlib.pyplot as plt
from numpy import cos, sin
import scipy
from scipy.interpolate import interp1d

def coord_change(qu, qv, inclination_tmp, ang_tmp):
    
    ang             = ang_tmp*np.pi/180
    inclination     = inclination_tmp*np.pi/180
    
    new_qu          = (qu*cos(ang)+qv*sin(ang))*cos(inclination)
    new_qv          = -qu*sin(ang)+qv*cos(ang)

    new_q = np.sqrt((new_qu)**2 + (new_qv)**2)

    return new_qu, new_qv, new_q    


def imaging_model(rho,I_profile, R_image, **kwargs):
    
    method        = kwargs.get('method', 'linear')    

    
    xv = np.linspace(-R_image, R_image, len(rho)*2, endpoint=False) # window size

    interpol_index = interp1d(rho, I_profile,kind=method)    
    X, Y = np.meshgrid(xv, xv)
    profilegrid2 = np.zeros(X.shape, float)
    current_radius = np.sqrt(X**2 + Y**2)
    cond = np.logical_and(current_radius<=max(rho), current_radius>=min(rho)) # min and max of the data
    profilegrid2[cond] = interpol_index(current_radius[cond])    

    return xv, profilegrid2  



champ       = 80 # mas 
diam         = 10 # mas
nb_points    = 100 
rho          = np.linspace(0,champ/2,nb_points)

inclin = 20
angle  = 0

I_profile              = np.zeros(nb_points)
I_profile[rho<=diam/2] = 1


xv, image = imaging_model(rho, I_profile, champ/2)

##############################


x_mod, y_mod, x_tot_mod = coord_change(xv, xv, inclin, angle)

f_image = scipy.interpolate.interp2d(xv, xv, image)

image_mod = f_image(x_mod,y_mod)


plt.figure()
plt.imshow(image_mod,extent=[min(xv),max(xv),min(xv),max(xv)])

Little remark: I don’t want to use a ready-made rotation function; I would like to do the rotation by hand.

UPDATE:

I’ve written the same code but with the inclination and rotation steps separated, to make it easier to understand and to pinpoint where the problem comes from:

import numpy as np
import matplotlib.pyplot as plt
from numpy import cos, sin
import scipy
from scipy.interpolate import interp1d

def coord_rotation(qu, qv, ang_tmp):
    
    ang             = ang_tmp*np.pi/180
    
    new_qu          = qu*cos(ang)+qv*sin(ang)
    new_qv          = -qu*sin(ang)+qv*cos(ang)

    new_q = np.sqrt((new_qu)**2 + (new_qv)**2)

    return new_qu, new_qv, new_q    


def coord_inclination(qu, qv, inclination_tmp):
    
    inclination     = inclination_tmp*np.pi/180
    
    new_qu          = qu*cos(inclination)
    new_qv          = qv

    new_q = np.sqrt((new_qu)**2 + (new_qv)**2)

    return new_qu, new_qv, new_q    

def imaging_model(rho,I_profile, R_image, **kwargs):
    
    method        = kwargs.get('method', 'linear')    

    
    xv = np.linspace(-R_image, R_image, len(rho)*2, endpoint=False) # window size

    interpol_index = interp1d(rho, I_profile,kind=method)    
    X, Y = np.meshgrid(xv, xv)
    profilegrid2 = np.zeros(X.shape, float)
    current_radius = np.sqrt(X**2 + Y**2)
    cond = np.logical_and(current_radius<=max(rho), current_radius>=min(rho)) # min and max of the data
    profilegrid2[cond] = interpol_index(current_radius[cond])    

    return xv, profilegrid2  


champ       = 80 # mas 
diam         = 10 # mas
nb_points    = 100 
rho          = np.linspace(0,champ/2,nb_points)

inclin = 30
angle  = 40

I_profile              = np.zeros(nb_points)
I_profile[rho<=diam/2] = 1


xv, image = imaging_model(rho, I_profile, champ/2)

##############################

x_inc, y_inc, x_tot_inc = coord_inclination(xv, xv, inclin)
f_inc = scipy.interpolate.interp2d(xv, xv, image)
image_inc = f_inc(x_inc,y_inc)


x_rot, y_rot, x_tot_rot = coord_rotation(xv, xv, angle)
f_rot = scipy.interpolate.interp2d(xv, xv, image_inc)
image_rot = f_rot(x_rot, y_rot)


plt.figure()
plt.imshow(image, extent=[min(xv),max(xv),min(xv),max(xv)])
plt.title('Original')


plt.figure()
plt.imshow(image_inc, extent=[min(xv),max(xv),min(xv),max(xv)])
plt.title('After inclination')

plt.figure()
plt.imshow(image_rot , extent=[min(xv),max(xv),min(xv),max(xv)])
plt.title('After inclination + rotation')

Normal picture: OK

After Inclination by an angle of 30°: OK

After Inclination by an angle of 30° AND Rotation by an angle of 40°: PROBLEM

Solution

There are several issues here. In the example, I’ve reduced the resolution and the transformation angles a little to make the issues easier to spot:

champ       = 80 # mas 
diam         = 40 # mas
nb_points    = 20 
rho          = np.linspace(0,champ/2,nb_points)

inclin = 50
angle  = 30

Issue 1: mesh vs. coordinate-aligned grid

xv in your code is a 1D array, and the data locations are every combination of one element of xv with another: a regular, rectangular grid, aligned with the coordinate axes.
During the inclination transform, x_inc and y_inc are also 1D arrays, and data is moved to every combination of an element of x_inc with an element of y_inc. This happens to work because the transformation only scales a single axis: for every value of y the x coordinate is still constant, and for each value of x the y coordinate is constant, so the grid stays rectangular and axis-aligned.

Rotation, however, breaks that rule. This requires you to use a full-factorial grid of data.

This means that instead of handing xv, xv to the transformation function, you need to give it a full factorial mesh, created e.g. by np.meshgrid. The following shows what your code is actually doing and how pyplot interprets the data (resolution reduced to 20 points so that individual data points are discernible):

fig1, ax1 = plt.subplots()

ax1.set_aspect('equal')
ax1.scatter(x_inc, y_inc, marker='.', c='b', s=2)
x_inc_full, y_inc_full = np.meshgrid(x_inc, y_inc)
ax1.scatter(x_inc_full, y_inc_full, marker='.', c='b', s=0.1)

ax1.scatter(x_rot, y_rot, marker='.', c='r', s=2)
x_rot_full, y_rot_full = np.meshgrid(x_rot, y_rot)
ax1.scatter(x_rot_full, y_rot_full, marker='.', c='r', s=0.1)

Result:

Blue dots: "inclined" coordinates; red dots: "rotated" coordinates. Thick dots are the output of the current code; thin ones are how pyplot interprets them. These are the transformations as currently applied.

You can see the code takes the diagonal input line and stretches/rotates it, then re-interprets it as a mesh.
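
The distinction is easy to check numerically: rotating the 1D axis vectors and then forming a mesh is not the same as forming the mesh first and then rotating every point. A small sketch (angle and grid size here are arbitrary, not taken from the code above):

```python
import numpy as np

ang = np.deg2rad(40)
xv = np.linspace(-1, 1, 5)

# what the current code does: rotate the 1D axis vectors, then mesh them
xr = xv * np.cos(ang) + xv * np.sin(ang)
yr = -xv * np.sin(ang) + xv * np.cos(ang)
Xw, Yw = np.meshgrid(xr, yr)

# what a rotation should do: mesh first, then rotate every point
X, Y = np.meshgrid(xv, xv)
Xr = X * np.cos(ang) + Y * np.sin(ang)
Yr = -X * np.sin(ang) + Y * np.cos(ang)

print(np.allclose(Xw[0], Xw[-1]))  # True: still an axis-aligned grid
print(np.allclose(Xr[0], Xr[-1]))  # False: rows differ after a real rotation
```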

If we instead perform the transformations on a full mesh, things start looking more sensible:

# again, but doing the transformations on the mesh
fig2, ax2 = plt.subplots()
ax2.set_aspect('equal')

x_orig, y_orig = np.meshgrid(xv, xv)
x_incf, y_incf, xtot_inc = coord_inclination(x_orig, y_orig, inclin)
ax2.scatter(x_incf, y_incf, marker='.', c='b', s=0.2)

x_rotf, y_rotf, xtot_rot = coord_rotation(x_orig, y_orig, angle)
ax2.scatter(x_rotf, y_rotf, marker='.', c='r', s=0.2)

Result:

Blue: "inclined" mesh; red: rotated mesh.

Now, that’s better!

Except…

Issue 2: the transformations don’t do what you think they do

If you look at the previous figure, you notice that the x-axis has been compressed, not the y axis, yet in your inclined figure the circle is wider than tall.

That’s because you are interpolating the figure onto the new grid, but then plotting the interpolated figure on a regular grid.

This means what you’re seeing is actually the inverse of the transformation you probably meant to apply. The inclination transform condenses the x axis, but the resulting image is stretched along the x axis by the same amount. That’s visible in your original pictures, but becomes obvious with the updated parameters I’m using:

original
after the original inclination transform
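
The stretching direction can be reproduced in 1D with a hypothetical top-hat profile (illustrative only, unrelated to the code above): sampling the profile at the compressed coordinates x*cos(i) and plotting the result against x makes the feature wider by 1/cos(i), not narrower:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(-40, 40, 801)               # 0.1-spaced grid
profile = (np.abs(x) <= 5).astype(float)    # top-hat, full width ~10
f = interp1d(x, profile)

inc = np.deg2rad(60)                        # cos(60 deg) = 0.5
resampled = f(x * np.cos(inc))              # sample at *compressed* coordinates

dx = x[1] - x[0]
print(profile.sum() * dx)    # ~10: original width
print(resampled.sum() * dx)  # ~20: width doubled, not halved
```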

The solution would be to apply the transformation inversely: interpret the image as being on the transformed grid (which is what you want it to look like), then interpolate it onto the regular grid for display:

interp_incf = scipy.interpolate.interp2d(x_incf, y_incf, image)
image_incf = interp_incf(xv, xv)

Except…

Issue 3: scipy doesn’t like interpolations onto or from grids that are not axis-parallel

The only way I’ve found to do interpolation involving arbitrary meshes is by running (ugh…) loops around all the point coordinates that we want to interpolate to:

# inclined
interp_orig = scipy.interpolate.interp2d(xv, xv, image)
image_incf = np.array([[interp_orig(x, y)  for x, y in zip(xline, yline)]\
                       for xline, yline in zip(x_incf, y_incf)]\
                      ).reshape(x_incf.shape)

The above still suffers from Issue 2 (the transformation works the wrong way round), and we can’t interpolate from an arbitrarily deformed mesh. This means that the transformation itself needs to be inverted:

def inv_coord_rotation(qu, qv, ang_tmp):
    
    ang             = ang_tmp*np.pi/180
    
    inv_qu          = qu*cos(-ang)+qv*sin(-ang)
    inv_qv          = -qu*sin(-ang)+qv*cos(-ang)

    new_q = np.sqrt((inv_qu)**2 + (inv_qv)**2)

    return inv_qu, inv_qv, new_q    


def inv_coord_inclination(qu, qv, inclination_tmp):
    
    inclination     = inclination_tmp*np.pi/180
    
    inv_qu          = qu/cos(inclination)
    inv_qv          = qv

    inv_q = np.sqrt((inv_qu)**2 + (inv_qv)**2)

    return inv_qu, inv_qv, inv_q
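
A quick sanity check that the inverse really undoes the forward transform, restating the two rotation functions in trimmed-down form (without the radius output):

```python
import numpy as np

def coord_rotation(qu, qv, ang_deg):
    # forward rotation, as in the question's code
    a = np.deg2rad(ang_deg)
    return qu * np.cos(a) + qv * np.sin(a), -qu * np.sin(a) + qv * np.cos(a)

def inv_coord_rotation(qu, qv, ang_deg):
    # rotating by -ang undoes rotating by +ang
    return coord_rotation(qu, qv, -ang_deg)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(10), rng.standard_normal(10)
xr, yr = coord_rotation(x, y, 40)
xb, yb = inv_coord_rotation(xr, yr, 40)
print(np.allclose(xb, x) and np.allclose(yb, y))  # True
```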

…and now we can finally do what we came here to do:

# and now for the pictures, with interpolation applied in correct direction:
fig3, axes = plt.subplots(ncols=3)

# interpolator for the original image:
interp_orig = scipy.interpolate.interp2d(xv, xv, image)

# inclined, using the *inverse* inclination transform
x_incfi, y_incfi, xtot_incfi = inv_coord_inclination(x_orig, y_orig, inclin)
image_incf = np.array([[interp_orig(x, y) for x, y in zip(xline, yline)]
                       for xline, yline in zip(x_incfi, y_incfi)]
                      ).reshape(x_incfi.shape)
axes[0].imshow(image_incf, extent=[min(xv),max(xv),min(xv),max(xv)])
# should be identical to the original inclined version, image_inc


# rotated version of the inclined image
interp_incf = scipy.interpolate.interp2d(xv, xv, image_incf)
x_rotfi, y_rotfi, xtot_rotfi = inv_coord_rotation(x_orig, y_orig, angle)
image_rotf = np.array([[interp_incf(x, y)  for x, y in zip(xline, yline)]\
                       for xline, yline in zip(x_rotfi, y_rotfi)]\
                      ).reshape(x_rotfi.shape)
axes[1].imshow(image_rotf, extent=[min(xv),max(xv),min(xv),max(xv)])

transformation of the image, as prescribed

However…

… I just tried combining the two transformations, to avoid interpolating an already-interpolated image, and it’s not trivial, because it would require you to apply the "inclination" transform in the rotated space, and then things get weird.

It’s probably possible to implement a function which takes the non-inverted transformed pixel coordinates and works out the inverse transformation from there, but really, at this point we’re just working against scipy.interpolate, similar to my own odyssey with it. I think the general recommendation if you’re dealing with images is to use Pillow, but that would be outside of my own experience.
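
As an aside, interpolating from a regular grid to arbitrary target points can be done without explicit Python loops via scipy.interpolate.RegularGridInterpolator. A sketch with a stand-in disk image (the grid, angle, and names here are illustrative, not taken from the code above):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# stand-in image: a centred disk sampled on a regular square grid
xv = np.linspace(-40, 40, 81)
X, Y = np.meshgrid(xv, xv)
image = (np.sqrt(X**2 + Y**2) <= 5).astype(float)

# note the (y, x) axis order: image[i, j] lives at y = xv[i], x = xv[j]
interp = RegularGridInterpolator((xv, xv), image,
                                 bounds_error=False, fill_value=0.0)

# arbitrary (here: rotated) target coordinates, one (y, x) pair per pixel
ang = np.deg2rad(40)
Xq = X * np.cos(ang) + Y * np.sin(ang)
Yq = -X * np.sin(ang) + Y * np.cos(ang)
resampled = interp(np.column_stack([Yq.ravel(), Xq.ravel()])).reshape(X.shape)
```

fill_value=0 takes care of target coordinates that fall outside the original grid after the rotation.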

Answered By – Zak

Answer Checked By – David Goodson (AngularFixing Volunteer)
