Drift correction with sub-pixel resolution

Hi all, 

I want to drift-correct an image with sub-pixel resolution. I recently realised that

 ImageTransform/IOFF={dx, dy} offset wave2d

can only translate an image by trunc(dx), trunc(dy) (I hope I am not wrong here).

I now use:

ImageInterpolate/APRM={1,0,dx,0,1,dy,1,0} Affine2D wave2d

I want to apply the operation to image stacks with many layers (a different dx, dy for each layer). Is this choice of operation optimal?

Do you have any suggestion?

Thank you.

ImageInterpolate seems like a good option to me. Is it too slow for you?

Purely for visualization purposes you can of course use SetScale, but I assume you want to add, subtract, multiply, etc. images with each other.
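For display, something like this shifts only the wave scaling, leaving the data untouched (a quick sketch; dx and dy are the pixel shifts and wave2d is the image from your post):

    // Offset the image axes by dx, dy pixels via the wave scaling only.
    SetScale/P x, DimOffset(wave2d, 0) + dx*DimDelta(wave2d, 0), DimDelta(wave2d, 0), wave2d
    SetScale/P y, DimOffset(wave2d, 1) + dy*DimDelta(wave2d, 1), DimDelta(wave2d, 1), wave2d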


Hi olelytken,

The function is part of an operation that drift-corrects a stack of 200-500 images (4K resolution).

Until recently, I was using "ImageTransform/IOFF={dx, dy} offset", assuming (incorrectly) that it can translate by non-integer values.

ImageInterpolate has performance similar to ImageTransform on stacks with many layers. I was wondering if there is another way, which I cannot think of at the moment, that might give some performance gain.

Cheers,

eg

Historically, ImageTransform offset was not designed for correcting images and its documentation speaks of "pixels" that were assumed to be integers.

ImageInterpolate with the keyword Resample was specifically written for correcting images.  It is thread-safe but not automatically multithreaded, so you could split your task yourself and run it under multiple user threads.
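A rough sketch of that kind of splitting (the function and wave names here are made up; it assumes the per-layer shifts dxW, dyW have already been measured, and each worker writes only its own layers):

    // ThreadSafe worker: Resample-corrects layers first..last of w3d in place.
    ThreadSafe Function AlignWorker(w3d, dxW, dyW, first, last)
        WAVE w3d, dxW, dyW
        Variable first, last
        Variable i, nx = DimSize(w3d, 0), ny = DimSize(w3d, 1)
        for(i = first; i <= last; i += 1)
            MatrixOP/O/FREE lay = layer(w3d, i)
            // /RESL pins the output size so every layer stays nx x ny
            ImageInterpolate/TRNS={scaleShift, dxW[i], 1, dyW[i], 1}/RESL={nx, ny}/DEST=corrected Resample lay
            WAVE corrected
            w3d[][][i] = corrected[p][q]
        endfor
        return 0
    End

    Function AlignStackMT(w3d, dxW, dyW)
        WAVE w3d, dxW, dyW
        Variable layers = DimSize(w3d, 2)
        Variable nt = ThreadProcessorCount
        Variable tgID = ThreadGroupCreate(nt)
        Variable i, first, chunk = ceil(layers / nt)
        for(i = 0; i < nt; i += 1)
            first = i * chunk
            if(first >= layers)
                break
            endif
            ThreadStart tgID, i, AlignWorker(w3d, dxW, dyW, first, min(first + chunk, layers) - 1)
        endfor
        Variable status = ThreadGroupWait(tgID, inf)    // block until every worker finishes
        Variable released = ThreadGroupRelease(tgID)
    End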

ImageInterpolate with the keyword Affine2D is automatically multithreaded (in IP9) and should have better performance than Resample.

A.G.


Hi A.G.,

I changed my program and I now use the Affine2D method. 

I copy part of the code here:

    MatrixOP/O M_Affine = layer(partitionW3d, 0) // Get the first layer from partitionW3d; the name M_Affine is ImageInterpolate's default output for Affine2D
    for(i = 0; i < layers - 1; i += 1)
        MatrixOP/O/FREE targetLayer = layer(partitionW3d, i + 1)
        ImageRegistration/Q/TRNS={1,1,0}/ROT={0,0,0}/TSTM=0/BVAL=0 refwave = M_Affine, testwave = targetLayer // Error can appear here: M_Affine may differ from targetLayer by one pixel!
        WAVE W_RegParams
        dx = W_RegParams[0]; dy = W_RegParams[1]
        ImageInterpolate/APRM={1,0,dx,0,1,dy,1,0}/DEST=M_Affine Affine2D targetLayer // Will overwrite M_Affine
        MatrixOP/O/FREE w3dLayer = layer(w3d, i + 1)
        ImageInterpolate/APRM={1,0,dx,0,1,dy,1,0}/DEST=$("getStacklayer_" + num2str(i + 1)) Affine2D w3dLayer
    endfor

I use a partition (partitionW3d) of the 3D wave (w3d) I want to align to get the drift correction. The partitionW3d is created by selecting the boundaries with the Marquee on w3d.

In some cases, though, this line:

ImageInterpolate/APRM={1,0,dx,0,1,dy,1,0}/DEST=M_Affine Affine2D targetLayer // Will overwrite M_Affine

gives an output (M_Affine) that is one pixel smaller than the input wave, and the loop then pops an error: Wave length mismatch.

Is there a way to deal with it, besides if-checks and re-interpolation of the ill-behaved cases? Or is it better to use Resample with the /RESL={nx, ny} flag to be safe?

ImageInterpolate/TRNS={scaleShift, dx, 1, dy, 1} /RESL={nx, ny} Resample targetLayer

Cheers,

eg

 

In these situations, it is a good idea to email support@wavemetrics.com an Igor experiment containing the relevant data and code.

ImageInterpolate with the Affine2D keyword computes the resulting image size from the transformation parameters.  This involves floating-point multiplications and truncation to integers, so it does not surprise me that in some circumstances the result may suffer from 1-pixel roundoffs.  There are a number of ways to adjust the other images to match the interpolation output.
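For instance (a minimal sketch with a made-up function name; whether padding or cropping is acceptable depends on your data), you could Redimension one wave to the other's size before the two are used together:

    // Force a wave back to a reference wave's size.
    // Growing pads with zeros; shrinking crops the extra row/column.
    Function MatchSize(out, ref)
        WAVE out, ref
        if(DimSize(out, 0) != DimSize(ref, 0) || DimSize(out, 1) != DimSize(ref, 1))
            Redimension/N=(DimSize(ref, 0), DimSize(ref, 1)) out
        endif
    End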

It is not obvious to me why you chose to use ImageInterpolate to align an image using parameters derived from ImageRegistration when the latter operation computes the transformed image already.

 

A.G.


Hi A.G,

thank you for your answer. I will e-mail the support with the code.

> It is not obvious to me why you chose to use ImageInterpolate to align an image using parameters derived from ImageRegistration when the latter operation computes the transformed image already.

We have 2K images to process (soon 4K). We need to align stacks of 200-500 images, and running ImageRegistration on the full images takes a really long time. Also, it does not perform very well most of the time, as our spectra change contrast between layers and we have big differences in intensity. So we isolate a specific feature in the image (a defect) in a rectangle of, e.g., 30 x 50 pixels, apply ImageRegistration there, and use the resulting dx, dy to drift-correct the original image. If one feature does not give good results, we can try another spot; this way the procedure is fast.
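In condensed form, that ROI step looks roughly like this (a sketch with my own names; for brevity it registers every layer against layer 0, while my actual code registers consecutive corrected layers):

    // Measure per-layer drift from a small Marquee-selected ROI around a defect.
    Function DriftFromROI(w3d, p0, p1, q0, q1)
        WAVE w3d
        Variable p0, p1, q0, q1 // pixel bounds of the tracked rectangle
        Variable i, layers = DimSize(w3d, 2)
        Make/O/N=(layers) dxW = 0, dyW = 0  // per-layer shifts
        Duplicate/O/R=[p0, p1][q0, q1][] w3d, partitionW3d  // small stack, fast to register
        MatrixOP/O/FREE refLayer = layer(partitionW3d, 0)
        for(i = 1; i < layers; i += 1)
            MatrixOP/O/FREE testLayer = layer(partitionW3d, i)
            ImageRegistration/Q/TRNS={1,1,0}/ROT={0,0,0}/TSTM=0/BVAL=0 refwave = refLayer, testwave = testLayer
            WAVE W_RegParams
            dxW[i] = W_RegParams[0]
            dyW[i] = W_RegParams[1]
        endfor
    End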

All the best,

eg

> Also, it does not perform very well most of the time, as our spectra change contrast between layers and we have big differences in intensity.

Would you find some benefit if the intensity levels in the image stack were first normalized for layer-to-layer consistency in absolute intensity, dynamic range, and/or overall intensity flux?
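For example, even something as simple as rescaling each layer to zero mean and unit standard deviation before registration might be worth a try (a quick sketch, with a made-up function name):

    // Rescale each layer of a stack to zero mean and unit standard deviation.
    Function NormalizeLayers(w3d)
        WAVE w3d
        Variable i, layers = DimSize(w3d, 2)
        for(i = 0; i < layers; i += 1)
            MatrixOP/O/FREE lay = layer(w3d, i)
            WaveStats/Q lay  // sets V_avg and V_sdev
            w3d[][][i] = (w3d[p][q][i] - V_avg) / V_sdev
        endfor
    End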

Hi jjweimer,

Our stacks are XAS or XPEEM images over absorption edges or core levels. The contrast between layers reverses or vanishes in different regions across different stack ranges.

I have tried normalisation, dynamic-range adjustment and, lately, histogram equalisation. Sometimes they help. Edge detection gives the best results but often fails too. Unfortunately, there is no silver bullet for this problem.

Other colleagues at our beamline use pattern matching and other ImageJ packages. That is also hit and miss.