Hi Ian,

As discussed via email, here's the PR you suggested.
Additionally, I'd like to propose limiting the camera's field of view (FoV) to a displaced subset of the SLM space. The motivation is that when we remove the zeroth diffraction order from the SLM, the actual imaged FoV becomes smaller than the full SLM FoV.
Displacement is already supported via the offset argument in SimulatedCamera.build_affine.
I'd appreciate your feedback on how we want to implement the FoV limitation. So far, the assumption seems to have been that the camera FoV is at least as large as the SLM's. Let me know how you'd prefer to approach this.
Would you prefer that I check for M is None and b is None in SimulatedCamera.__init__ and set a flag, e.g., self._farfield = True, indicating "SLM farfield mode", which is then used in both build_affine and set_affine? Or should I handle this check only inside set_affine?
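For concreteness, the flag logic of the first option would be something like this (a minimal sketch only; the real SimulatedCamera.__init__ of course takes more arguments):

```python
class SimulatedCamera:
    """Minimal stand-in; only illustrates the proposed flag logic."""

    def __init__(self, M=None, b=None):
        # "SLM farfield mode": no affine transform (M, b) was supplied,
        # so the camera is assumed to image the SLM farfield directly.
        self._farfield = M is None and b is None
```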
Just want to make sure I align with your intended design.

Best regards,
Bodo
Re: FoV limitation. I think a good way to do this is to implement a window of interest (WOI, also called ROI) for SimulatedCamera. That way, the user would input the desired cropped region of the FOV in units of the full (probably padded) FOV. WOI support is not super comprehensive in slmsuite right now (#33), but the set_woi() function is at least standardized in the template. Does this sound like a reasonable solution?
Re: self._farfield = True. I think self._interpolate already covers this: when self._interpolate = False, the SimulatedCamera should be equivalent to the farfield of the SLM.
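To make the units concrete, converting a user-facing WOI (given as fractions of the full, padded FOV) to pixel indices could look like this hypothetical helper (woi_to_pixels is not an slmsuite function, just an illustration):

```python
def woi_to_pixels(woi_frac, shape):
    """Convert a WOI given as fractions of the full (padded) FOV,
    (x, width, y, height) with each value in [0, 1], to integer pixels.

    Hypothetical helper for illustration; not part of slmsuite.
    """
    h, w = shape  # numpy convention: (height, width)
    x, dx, y, dy = woi_frac
    return (round(x * w), round(dx * w), round(y * h), round(dy * h))
```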
Yes, my plan was to use set_woi.

1. What do you think about updating Camera.set_woi to set cam._woi, which we then use in Camera.get_image to crop the image by indexing? Or would you rather only implement such behavior in SimulatedCamera for now?
2. Does Hologram.plot_farfield "detect" the field of view if we use set_woi?
A main reason that we have been procrastinating on WOI stuff is that it messes up the calibrations when the WOI changes (e.g., the b value in Fourier calibration). Thus, for now let's keep it inside SimulatedCamera, but eventually it would become a superclass Camera method (yes, a default method that crops via indexing, overridden by cameras that support WOI restriction to instead do this on hardware).
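As a toy sketch of that eventual default (names and structure hypothetical, not the current slmsuite API):

```python
import numpy as np

class Camera:
    """Toy sketch of the eventual default WOI behavior described above."""

    def __init__(self, width, height):
        self.shape = (height, width)
        self._woi = (0, width, 0, height)   # (x, width, y, height): full frame.

    def set_woi(self, woi=None):
        # Default: software WOI. Subclasses with hardware WOI support would
        # override this to restrict the readout on the device instead.
        self._woi = (0, self.shape[1], 0, self.shape[0]) if woi is None else woi

    def get_image(self):
        img = self._capture()               # Full frame from the subclass.
        x, w, y, h = self._woi
        return img[y:y + h, x:x + w]        # Crop via indexing.

class SimulatedCamera(Camera):
    def _capture(self):
        return np.zeros(self.shape)         # Placeholder farfield image.
```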
plot_farfield() plots the full extent of the computational Fourier space, but also plots the outline of the Camera's FOV in yellow. The current method has no knowledge of the WOI. In the future, when WOI support is implemented, we'll probably draw the full possible extent of the camera with a dashed yellow line and the current WOI-cropped extent with a solid yellow line.
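That future convention could be drawn roughly like this with matplotlib (coordinates are placeholders):

```python
import matplotlib
matplotlib.use("Agg")  # Headless backend, for illustration only.
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
# Full possible extent of the camera: dashed yellow outline.
full = Rectangle((0, 0), 100, 80, fill=False, edgecolor="y", linestyle="--")
# Current WOI-cropped extent: solid yellow outline.
woi = Rectangle((20, 10), 50, 40, fill=False, edgecolor="y", linestyle="-")
ax.add_patch(full)
ax.add_patch(woi)
ax.set_xlim(-10, 110)
ax.set_ylim(-10, 90)
```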
I'm debating some more major structural changes to Camera and SimulatedCamera before 0.3.0 as part of a small examples revamp (esp. computational_holography). Some form of this will probably land there (though without the WOI stuff). I'll update you tomorrow.