
polycrystal crashes when creating a really huge sample (8 million atoms, 8 grains) #58

@jfikar

Description


Dear Pierre,

I think I'm hitting some limit when producing a very large polycrystal:

atomsk --polycrystal W.xsf tmp.txt poly.cfg -wrap
 ___________________________________________________
|              ___________                          |
|     o---o    A T O M S K                          |
|    o---o|    Version master-2024-05-17            |
|    |   |o    (C) 2010 Pierre Hirel                |
|    o---o     https://atomsk.univ-lille.fr         |
|___________________________________________________|
>>> Constructing a polycrystal using Voronoi method.
>>> Opening the input file: W.xsf
..> Input file was read successfully (2 atoms).
>>> Reading parameters for Voronoi construction from: tmp.txt
..> File was successfully read.
..> Number of grains to be constructed: 8
..> Using a 3-D Voronoi tesselation.

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x7efe527ba842 in ???
#1  0x7efe527b99d5 in ???
#2  0x7efe524776ef in ???
#3  0x64fb99 in ???
#4  0x664ec1 in ???
#5  0x7a24f4 in ???
#6  0x40256c in ???
#7  0x7efe5246258f in ???
#8  0x7efe5246263f in ???
#9  0x4025a4 in ???
#10  0xffffffffffffffff in ???
Segmentation fault (core dumped)

W.xsf is the output of atomsk --create bcc 3.16519556286392 W xsf, and tmp.txt is:

box 214.914 186.121 3200          
node 0.25*box 0.00*box 0.25*box         136.975847 8.366605 -116.116910
node 0.25*box 0.00*box 0.75*box         167.013630 -54.780334 18.100615
node 0.75*box 0.00*box 0.25*box         152.994318 5.012912 147.076081
node 0.75*box 0.00*box 0.75*box         43.115810 -13.310758 141.873019
node 0.50*box 0.50*box 0.25*box         117.138918 -78.712257 65.991134
node 0.50*box 0.50*box 0.75*box         131.501305 -24.487653 109.780401
node 0.00*box 0.50*box 0.25*box         52.894277 -3.522170 125.605655
node 0.00*box 0.50*box 0.75*box         30.053892 -2.480763 131.306464

A sample half as tall, i.e. box 214.914 186.121 1600, worked well on a computer with 128 GB RAM; it has 4 million atoms. The bigger sample is thus expected to contain 8 million atoms. I also tried the huge sample on a computer with 256 GB RAM, but it seems to allocate at most approximately 124 GB and then it crashes.
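
For reference, a quick back-of-the-envelope check of those atom counts (plain Python, not part of Atomsk; the lattice parameter and box dimensions are the ones from the commands above, and bcc has 2 atoms per cubic unit cell):

a = 3.16519556286392                   # W bcc lattice parameter, in Angstrom
lx, ly, lz = 214.914, 186.121, 3200.0  # box dimensions from tmp.txt, in Angstrom
atoms = 2 * (lx * ly * lz) / a**3      # bcc: 2 atoms per cubic unit cell
print(f"{atoms:.2e}")                  # ~8.07e+06; the half-height box (lz = 1600) gives ~4.04e+06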

Might this be some internal limitation?
