Perform histogramming for density-plots in equi-angle cubed-sphere co… #588
base: develop
Conversation
Thank you for opening this PR, @MACarlsen! Looking forward to trying it out.
I think this is as far as I can get. I did break some things in order to fix the problem: the histograms are computed on six separate grids, so to avoid having to touch the plotting code too much in this PR, I am interpolating back to a polar grid and passing that to the plotting functions. Some assumptions in the existing tests rely on the plot grid being equal-area, which it is not in the current version. It is also not clear from the documentation what the expected behaviour should be in that case. But since the bins in the equal-area polar grid are not useful, I don't see the value of this. If there is some need to know the counts per arbitrary bin, I would suggest separating that functionality from plotting. But of course this is not my package.
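The interpolation back onto a polar grid could be sketched roughly like this (hypothetical names and synthetic data, generic scipy, not the PR's actual code): resample values known at scattered upper-hemisphere points onto an (azimuth, polar-angle) grid.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical stand-ins: `xyz` plays the role of the cube-face bin centers as
# unit vectors in the upper hemisphere, `values` the histogram counts there.
rng = np.random.default_rng(0)
xyz = rng.normal(size=(500, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
xyz[:, 2] = np.abs(xyz[:, 2])  # keep the upper hemisphere
values = rng.random(500)

# Target polar grid: azimuth x polar angle.
azimuth, polar = np.meshgrid(
    np.linspace(0, 2 * np.pi, 73), np.linspace(0, np.pi / 2, 19), indexing="ij"
)

# Interpolate in the two angular coordinates of the data points.
data_azimuth = np.arctan2(xyz[:, 1], xyz[:, 0]) % (2 * np.pi)
data_polar = np.arccos(xyz[:, 2])
grid = griddata(
    np.column_stack([data_azimuth, data_polar]),
    values,
    (azimuth, polar),
    method="nearest",
)
```

With `method="linear"`, grid points outside the convex hull of the data come back as NaN, which is why nearest-neighbour is the safer choice for a quick sketch.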
Hey @MACarlsen, working through this now. Two questions:
```python
import matplotlib.pyplot as plt

from orix import data, plot
from orix.vector import Miller

plt.close("all")

xmap = data.sdss_ferrite_austenite(allow_download=True)
pg_m3m = xmap.phases[1].point_group.laue
O_au = xmap["austenite"].orientations
O_fe = xmap["ferrite"].orientations

# Orientation colors
ckey_m3m = plot.IPFColorKeyTSL(pg_m3m)
rgb_au = ckey_m3m.orientation2color(O_au)
rgb_fe = ckey_m3m.orientation2color(O_fe)

# Austenite <111> poles in the sample reference frame
t_au = Miller(uvw=[1, 1, 1], phase=xmap.phases["austenite"])
t_au_all = O_au.inv().outer(t_au)

# Ferrite <111> poles in the sample reference frame
t_fe = Miller(uvw=[1, 1, 1], phase=xmap.phases["ferrite"])
t_fe_all = O_fe.inv().outer(t_fe)

fig, ax = plt.subplots(2, 2, subplot_kw={"projection": "ipf", "symmetry": pg_m3m})
ax[0, 0].scatter(t_au_all, s=2, c=rgb_au.T)
ax[0, 1].scatter(t_fe_all, s=2, c=rgb_fe.T)
ax[1, 0].pole_density_function(t_au_all)
ax[1, 1].pole_density_function(t_fe_all)
fig.subplots_adjust(hspace=0, wspace=0.1)
plt.tight_layout()
```
```python
import time

import matplotlib.pyplot as plt

from orix import data, plot

### Side quest to get plotting to work ###
import matplotlib.projections as mprojections
from orix.plot import StereographicPlot, InversePoleFigurePlot

mprojections.register_projection(StereographicPlot)
mprojections.register_projection(InversePoleFigurePlot)
### Side quest to get plotting to work ###

from orix.vector import Vector3d

xmap = data.sdss_ferrite_austenite(allow_download=True)
print(xmap)

# Extract orientations, O
pg_m3m = xmap.phases[1].point_group.laue
O_fe = xmap["ferrite"].orientations
O_au = xmap["austenite"].orientations

# Some sample direction, v
v = Vector3d([0, 0, 1])
v_title = "Z"

# Rotate sample direction v into every crystal orientation O
t_fe = O_fe * v
t_au = O_au * v

# Set IPDF range
vmin, vmax = (0, 3)

subplot_kw = {"projection": "ipf", "symmetry": pg_m3m, "direction": v}
fig = plt.figure(figsize=(9, 8))

ax0 = fig.add_subplot(221, **subplot_kw)
ax0.scatter(O_fe, alpha=0.05)
_ = ax0.set_title(f"Ferrite, {v_title}")

ax1 = fig.add_subplot(222, **subplot_kw)
ax1.scatter(O_au, alpha=0.05)
_ = ax1.set_title(f"Austenite, {v_title}")

t1 = time.time()
ax2 = fig.add_subplot(223, **subplot_kw)
ax2.pole_density_function(t_fe, vmin=vmin, vmax=vmax)
_ = ax2.set_title(f"Ferrite, {v_title}")
print(time.time() - t1)

t1 = time.time()
ax3 = fig.add_subplot(224, **subplot_kw)
ax3.pole_density_function(t_au, vmin=vmin, vmax=vmax)
_ = ax3.set_title(f"Austenite, {v_title}")
print(time.time() - t1)

plt.show()
```
Ope, apologies. The way permissions are set up, you need to do the merge or rebase yourself. You can do it in most git GUIs or via the command line, but there should also be an option on your GitHub page right below the green download option.
…ordinates to avoid unpleasant artifacts from too highly isotropic bins in the current version. Signed-off-by: MACarlsen <[email protected]>
…ters and vertexes
d884162 to c9afd08
@argerlt Should be rebased now :) You should have permissions to push to the branch, but it is a whole thing... GitHub changed how they do authentication and, quite honestly, it is a mess. How do you usually push/pull to GitHub? I had a similar issue a while back: pyxem/pyxem#1051. I generally use PyCharm and its integrated VCS tooling. The normal define-a-remote, check-out-the-branch flow doesn't work because it doesn't have the right permissions; signing in within the settings and then checking out through the pull requests sidebar does. This is all to say it's terribly confusing. The GitHub Desktop app also has the right permissions to push to other people's branches. I struggled with this for such a long time and really didn't get anywhere. But anyways.
@MACarlsen Not sure how interested you are in learning more about git. There are a lot of tools but other than the commit and Push / Pull:
I'll also note that I'm an avid user of squashing commits, which I absolutely love (for my own work). I use it a bunch if I happen to make a mistake, push it, and then have to make another commit to fix something; it helps clean up your git history. I think Linus is a little too strict in most cases (I like people to make PRs way before they are merged), but I always reference this as a good starting point for understanding what to do (and not do) with your git history: https://www.mail-archive.com/[email protected]/msg39091.html. Basically, don't be afraid to squash commits, force-push, and go wild on your own branch. If you ever get to the point where you get write access to a public repo like orix, then the advice about protecting git history becomes more important.
Good find! It seems that when the plot grid is projected into the fundamental zone, points that are exactly opposite can be assigned to either the left edge or the right edge more or less randomly, which gives these quadrilaterals in pcolormesh that stretch over the whole region. I'm not quite sure why they don't get masked out, though. We can actually simplify it a bit, because I am doing explicit symmetrization (summing over symmetry elements) instead of folding back into the fundamental zone, as was done before. This means, however, that the maps for cubic symmetry will get "ragged edges" at low resolution. For other point groups it's not an issue, since the boundaries are along graticules... or I guess icosahedral will also have this.
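Explicit symmetrization as described here can be illustrated with a toy sketch (not orix's implementation): apply every rotation of the point group to the data and accumulate, instead of folding each vector into the fundamental zone.

```python
import numpy as np

# Toy two-fold group: identity plus a 180-degree rotation about z.
rotations = np.array([
    np.eye(3),
    np.diag([-1.0, -1.0, 1.0]),
])


def symmetrize(vectors, rotations):
    """Return the symmetry-expanded set of vectors.

    vectors: (N, 3), rotations: (M, 3, 3). Histogramming the (M * N, 3)
    result and dividing by M is equivalent to summing the histogram over
    the symmetry elements.
    """
    expanded = np.einsum("mij,nj->mni", rotations, vectors)
    return expanded.reshape(-1, 3)


v = np.array([[1.0, 0.0, 0.0]])
symmetrize(v, rotations)  # -> [[1, 0, 0], [-1, 0, 0]]
```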
@argerlt I thought |
argerlt left a comment:
This functionality is a great upgrade; the code itself just needs some work to make it more readable, concise, and reusable. I left some suggestions; feel free to comment if you disagree.
```python
# If user explicitly passes symmetry=None
if symmetry is None:
    symmetry = C1
```
This makes passing None explicitly mean nothing (i.e., if someone changes the default C1 to None, this overwrites their parameter change). The better, type-hint-friendly way to do this is to keep the default as None and remove this code block.
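A minimal sketch of the two patterns this comment contrasts (placeholder names; `C1` here is just a stand-in object, not the actual orix symmetry):

```python
C1 = "C1"  # placeholder for orix.quaternion.symmetry.C1


def pdf_with_silent_overwrite(symmetry=C1):
    # The criticized pattern: an explicitly passed symmetry=None is
    # silently discarded and replaced by C1.
    if symmetry is None:
        symmetry = C1
    return symmetry


def pdf_with_sentinel_default(symmetry=None):
    # The suggested pattern: None is the documented default and maps to C1
    # in exactly one place; any Symmetry the caller passes is used as-is.
    return C1 if symmetry is None else symmetry
```

The second form also plays nicely with the `Symmetry | None` type hint, since `None` really is an accepted value rather than one that is immediately rewritten.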
```python
if resolution < 0.2 * sigma and resolution < 1.0:
    histogram_resolution = 0.2 * sigma
    plot_resolution = resolution
else:
    histogram_resolution = resolution
    plot_resolution = resolution
```
Silent overwriting of parameters like this is generally bad, even if potentially convenient; it makes too many assumptions about the hardware used, the use case of the user, etc. In this case, I think the better solution is to change the resolution default to 0.5 degrees in inverse_pole_figure_plot.pole_density_function.
```diff
-    symmetry: Symmetry | None = None,
+    symmetry: Symmetry | None = C1,
```
I believe this makes more sense and better matches the rest of ORIX the way it was. See the comment below for details.
```python
if v.size == 0:
    raise ValueError

if np.any(v.norm) == 0.0:
    raise ValueError
```
If you raise an error, you should also include a message describing the error (see above on line 111 for an example).
Yes, I wanted to ask what the norm is. I saw some other functions (like Vector3d.unit) just fail silently if you give them the zero-vector, so I figured I would have to re-write this anyway.
```python
    resolution,
# Smoothing
if sigma != 0.0:
    # If smoothing is only a bit, do 60 small steps,
```
Again, this is making assumptions about people's applications and adding silent overwrites. If you think something will run slow with the current defaults, it's better to just change default values.
That would be better, I agree.
```python
t = 1 / 3
N = int(1 / t * (sigma / histogram_resolution) ** 2)

hist = _smooth_gnom_cube_histograms(hist, t, N)
```
This is definitely an upgrade from the existing method, but is there a way we can do the blurring with scipy's gaussian filter, as opposed to a diffusion approach? I tried the following, which is fast but doesn't quite work right:
```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Apply a 1D gaussian blur to each face, then stitch across bounds.
# Assumes `hist` is the (6, sz, sz) stack of face histograms and that
# `sigma` and `resolution` are in the same angular units.
x_wrap = np.hstack([hist[2], hist[4, ::-1], hist[3], hist[5, ::-1]])
x_blur = gaussian_filter(x_wrap, sigma=[0, sigma / resolution], mode="wrap")
x_stack = x_blur.reshape(sz, 4, sz)
hist[2] = x_stack[:, 0, :]
hist[4] = x_stack[::-1, 1, :]
hist[3] = x_stack[:, 2, :]
hist[5] = x_stack[::-1, 3, :]

y_wrap = np.hstack([hist[0].T, hist[4].T, hist[1].T, hist[5, ::-1, ::-1].T])
y_blur = gaussian_filter(y_wrap, sigma=[0, sigma / resolution], mode="wrap")
y_stack = y_blur.reshape(sz, 4, sz)
hist[0] = y_stack[:, 0, :].T
hist[4] = y_stack[:, 1, :].T
hist[1] = y_stack[:, 2, :].T
hist[5] = y_stack[::-1, 3, ::-1].T

z_wrap = np.hstack(
    [hist[0], hist[2, :, ::-1].T, hist[1, ::-1, ::-1], hist[3, ::-1].T]
)
z_blur = gaussian_filter(z_wrap, sigma=[0, sigma / resolution], mode="wrap")
z_stack = z_blur.reshape(sz, 4, sz)
hist[0] = z_stack[:, 0, :]
hist[2] = z_stack[::-1, 1, :].T
hist[1] = z_stack[::-1, 2, ::-1]
hist[3] = z_stack[:, 3, ::-1].T
```
Basically doing 1D gaussian blurs around the x, y, and z axes individually. This almost works, but creates a tearing effect on whichever axis is blurred last:

[figure: gaussian_blurr, showing the tearing artifact after the final blur pass]
I tried a few different things, but it's difficult to get the corners right.
```python
def sample_S2_equiangle_cube_mesh_vertexes(
    resolution: float, grid_type: str = "spherified_corner"
```
I believe sample_uv_mesh already provides this capability.
```python
face_coordinate_1 = face_coordinates[0].ravel()
face_coordinate_2 = face_coordinates[1].ravel()
for face_index in range(6):
    this_face = face_index_array == face_index
```
Naming like this_face and that_face is generally best avoided, as the code should make sense to anyone looking at it a year from now.
I obviously agree that having this_face and that_face would be a problem. I don't think fi and mask are more explicit, though.
```python
    return hist, (x, y)


def _cube_gnom_coordinates(
```
We should try to replace this function with a considerably more concise one. Additionally, if you can make the face assignments part of a procedural for loop, that's generally much more desirable: it's less prone to creating an error later on, and much easier for future editors to comprehend.
```
face_center_vecs = face_rotations.inv() * (Vector3d.zvector())
face_index = np.argmax(face_center_vecs.dot_outer(v), 0)
```
Makes six copies of the full dataset. For large datasets that is significant overhead compared to the floating-point comparisons.
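A memory-light alternative can be sketched like this (illustrative face ordering, not necessarily orix's convention): assign each vector to a cube face from componentwise comparisons, rather than building a (6, N) table of dot products.

```python
import numpy as np


def cube_face_index(xyz):
    """Assign each (N, 3) vector to one of six cube faces.

    Face order here is an illustrative convention: +x, -x, +y, -y, +z, -z.
    Only O(N) temporaries are created, instead of six copies of the data.
    """
    xyz = np.asarray(xyz, dtype=float)
    dominant = np.argmax(np.abs(xyz), axis=-1)  # 0 = x, 1 = y, 2 = z
    # Sign of the dominant component decides the +/- face.
    major = np.take_along_axis(xyz, dominant[:, None], axis=-1)[:, 0]
    return 2 * dominant + (major < 0)  # (N,) face indices in [0, 5]
```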
```python
    return Vector3d(data=data)


def sample_S2_equiangle_cube_mesh_face_centers(
```
As you point out, I believe this capability already exists in sample_S2_cube_mesh; what you are doing here is the equivalent of setting grid_type="normalized".
It looks like what you specifically want is to avoid getting edges and corners, which would be a good optional input to add to sample_S2_cube_mesh. Within that function, you could add an if/then to cut the first and last entry off the grid_on_edge variable.
As a general rule, for a big project like ORIX, it's better to reuse as much code as possible to avoid errors down the road. If nothing else, it helps ensure identical calculations are performed identically in different functions throughout ORIX.
For example, some day someone might take your PDF function, compare the calculated centers to vectors calculated at the same resolution via sample_S2_cube_mesh, and not understand why the values are different or in a different order.
@MACarlsen, I left a review of this PR, but as part of running through and understanding what you did, I rewrote a more concise and (I believe?) equivalent implementation. Let me know what you think, or if I've left out a crucial component. It's also slower, but I think that's only because I removed the silent down-sampling logic.
MACarlsen left a comment:
As I said before, feel free to take it over. I'm not sure exactly which buttons you want me to press (or not) in here.
The solution is definitely not perfect, but I think it makes the PDFs at least usable.
Resolves #441 by performing the histogramming in six separate charts, corresponding to six cube faces, in gnomonic equal-angle coordinates. This avoids the coordinate divergence at the pole, which is unavoidable in any single-chart map of the sphere.
Smoothing is handled by diffusion on the cube surfaces and across edges.
I might interpolate the smoothed histograms onto a longitude-latitude map to avoid downstream changes. Remains to be seen.
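The binning idea in the description can be sketched as follows (a toy illustration under assumed conventions, not the PR's implementation): each vector is assigned to its dominant-axis cube face, and the two in-face gnomonic coordinates are converted to angles with arctan, so the bins stay nearly equi-angular everywhere and nothing diverges at the poles.

```python
import numpy as np


def equiangle_cube_bins(xyz, n_bins):
    """Histogram (N, 3) vectors into a (6, n_bins, n_bins) equi-angle cube grid.

    Face order (+x, -x, +y, -y, +z, -z) and the bin convention are
    illustrative assumptions, not orix's actual layout.
    """
    xyz = np.asarray(xyz, dtype=float)
    face_axis = np.argmax(np.abs(xyz), axis=-1)  # dominant axis per vector
    major = np.take_along_axis(xyz, face_axis[:, None], axis=-1)[:, 0]
    # The two non-dominant components divided by |major| are the gnomonic
    # coordinates on the face; arctan turns them into angles in (-pi/4, pi/4].
    others = np.array([np.delete(row, a) for row, a in zip(xyz, face_axis)])
    alpha, beta = np.arctan2(others, np.abs(major)[:, None]).T
    edges = np.linspace(-np.pi / 4, np.pi / 4, n_bins + 1)
    i = np.clip(np.digitize(alpha, edges) - 1, 0, n_bins - 1)
    j = np.clip(np.digitize(beta, edges) - 1, 0, n_bins - 1)
    face = 2 * face_axis + (major < 0)
    hist = np.zeros((6, n_bins, n_bins), dtype=int)
    np.add.at(hist, (face, i, j), 1)
    return hist
```

Because the bin edges are uniform in angle rather than in the gnomonic coordinate itself, cells near a face center and near its edges subtend similar angles, which is the property that removes the artifacts from strongly distorted polar-grid bins.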