pySIPFENN Tests

Core pySIPFENN Functionalities

class TestCore(methodName='runTest')[source]

Bases: TestCase

Test the core functionality of the Calculator object and other high-level API functions. It does not test the correctness of the descriptor generation functions or models, as these are delegated to other tests.
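
For orientation, below is a minimal sketch of the workflow these tests exercise. It assumes the public pysipfenn.Calculator API; the autoLoad keyword and the network_list_available attribute are recalled from the current API and should be verified against the source.

    from pysipfenn import Calculator

    # What setUp() does before every test: create a fresh Calculator.
    c = Calculator(autoLoad=False)   # skip loading downloaded models (assumed keyword)

    # What detectModels() exercises: refresh the list of usable models.
    c.updateModelAvailability()
    print(c.network_list_available)  # assumed attribute listing the available models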

detectModels()[source]

Test that the updateModelAvailability() method works without errors and returns a list of available models.

setUp()[source]

Initialise the Calculator object for testing. It will be used in all tests and is not modified in any way by them.

testDestroy()[source]

Test that the Calculator can deallocate itself (incl. loaded models and its data).

testDownloadAndLoadModels()[source]

Tests that the downloadModels() method works without errors in the case where the models are not already downloaded, and that they are then loaded correctly using the loadModels() method. It then loads a model explicitly using loadModel() and checks that it appears in the loadedModels list. Finally, it checks that an error is raised correctly if an unavailable model is requested to be loaded.
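
A hedged sketch of the download-and-load flow this test covers; the calls below are assumptions about the current Calculator API and the model name is purely illustrative.

    from pysipfenn import Calculator

    c = Calculator(autoLoad=False)             # start as if on a fresh install (assumed keyword)
    c.downloadModels()                         # fetch the model files if not already present
    c.loadModels()                             # load every downloaded model into memory

    c.loadModel("SIPFENN_Krajewski2020_NN24")  # explicit single-model load (illustrative name)
    print(c.loadedModels.keys())               # the loadedModels dict should now contain it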

testFromPOSCAR_KS2022()[source]

Updates the list of available models and identifies which models are compatible with the KS2022 descriptor, then runs featurization on structures from the exampleInputFiles directory. It also tests the printout of the Calculator object after the prediction run.
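
A sketch of the top-level call this test makes, assuming the runFromDirectory method and its directory/descriptor keywords; the path is illustrative.

    from pysipfenn import Calculator

    c = Calculator()   # loads the downloaded models

    # Featurize every POSCAR in a directory with KS2022 and run the compatible models.
    c.runFromDirectory(directory="exampleInputFiles", descriptor="KS2022")
    print(c)           # the __str__ printout summarizes the prediction run, as the test checks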

testFromPOSCAR_Ward2017()[source]

Updates the list of available models and identifies which models are compatible with the Ward2017 descriptor, then runs featurization on structures from the exampleInputFiles directory.

testFromPrototypes_KS2022_randomSolution()[source]

Quick runtime test of the top-level API for random solution structures. It does not test the accuracy, as that is delegated elsewhere.

testFromStructure_KS2022_dilute()[source]

Updates the list of available models and identifies which models are compatible with the KS2022_dilute featurization (KS2022 descriptor), then runs featurization on structures from the exampleInputFiles directory. It also checks that the ‘pure’ base-structure convenience shortcut works correctly by comparing the results to those for the original pure structure.

testInit()[source]

Test that the Calculator object is initialized correctly.

test_CalculatorPrint()[source]

Test that the Calculator.__str__() method returns the correctly formatted string after being initialized but before predictions.

test_RunModels_Errors()[source]

Test that the runModels() and runModels_dilute() methods raise errors correctly when called with no models to run or with a descriptor whose handling has not been implemented.

test_WriteDescriptorDataToCSV()[source]

Test that the writeDescriptorsToCSV() method writes the correct data to a CSV file and that the file is consistent with the reference output. It does so both for anonymous structures, which it enumerates, and for labeled structures based on the c.inputFileNames list.

test_WriteDescriptorDataToNPY()[source]

Test that the writeDescriptorsToNPY() method writes the correct data to an NPY file and that the file is consistent with the reference output. It does so both for anonymous structures, which it enumerates, and for labeled structures based on the c.inputFileNames list.
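
A sketch of persisting descriptor data after a featurization run; the descriptor/file keyword names are assumptions based on the method names used in these tests.

    from pysipfenn import Calculator

    c = Calculator()
    c.runFromDirectory(directory="exampleInputFiles", descriptor="KS2022")  # assumed call, see above

    # Write the computed descriptors to CSV (named rows) and NPY (raw array) outputs.
    c.writeDescriptorsToCSV(descriptor="KS2022", file="KS2022_DescriptorData.csv")
    c.writeDescriptorsToNPY(descriptor="KS2022", file="KS2022_DescriptorData.npy")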

test_descriptorCalculate_KS2022_dilute_parallel()[source]

Test successful execution of the descriptorCalculate() method with KS2022_dilute in parallel, based on an Al prototype loaded from the default prototype library. A separate test for calculation accuracy is done in test_KS2022.py.

test_descriptorCalculate_KS2022_dilute_serial()[source]

Test successful execution of the descriptorCalculate() method with KS2022_dilute in series, based on an Al prototype loaded from the default prototype library. A separate test for calculation accuracy is done in test_KS2022.py.

test_descriptorCalculate_KS2022_parallel()[source]

Test successful execution of the descriptorCalculate() method with KS2022 in parallel. A separate test for calculation accuracy is done in test_KS2022.py.

test_descriptorCalculate_KS2022_serial()[source]

Test successful execution of the descriptorCalculate() method with KS2022 in series. A separate test for calculation accuracy is done in test_KS2022.py.
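
A sketch of the serial and parallel featurization modes exercised above; the structList/mode/max_workers keyword names are assumptions matching the test names.

    from pymatgen.core import Structure
    from pysipfenn import Calculator

    c = Calculator(autoLoad=False)
    structs = [Structure.from_file(f) for f in ["POSCAR_1", "POSCAR_2"]]   # illustrative files

    c.calculate_KS2022(structList=structs, mode="serial")                   # one structure at a time
    c.calculate_KS2022(structList=structs, mode="parallel", max_workers=4)  # worker pool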

test_descriptorCalculate_Ward2017_parallel()[source]

Test successful execution of the descriptorCalculate() method with Ward2017 in parallel. A separate test for calculation accuracy is done in test_Ward2017.py.

test_descriptorCalculate_Ward2017_serial()[source]

Test successful execution of the descriptorCalculate() method with Ward2017 in series. A separate test for calculation accuracy is done in test_Ward2017.py.

test_util_Ward2017toKS2022()[source]

Tests that Ward2017 conversion to its KS2022 subset works as intended.

class TestCoreRSS(methodName='runTest')[source]

Bases: TestCase

Test the high-level API functionality of the Calculator object with regard to random solution structures (RSS). It does not test the accuracy, just all runtime modes and the known physicality of the results (e.g., FCC should have a coordination number of 12).

Note

The execution of the descriptorCalculate() method with KS2022_randomSolution is done under coarse settings (for speed reasons) and should not be used for any accuracy tests. A separate testing for calculation accuracy against consistency and reference values is done in test_KS2022_randomSolutions.py.

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_descriptorCalculate_KS2022_randomSolution_parallel_multiple()[source]

Test successful execution of many composition-structure pairs given in ordered lists of input.

test_descriptorCalculate_KS2022_randomSolution_parallel_pair()[source]

Test successful execution of a composition-structure pair in parallel mode, serving only to validate the input passing.

test_descriptorCalculate_KS2022_randomSolution_serial_multiple()[source]

Test successful execution (in series) of multiple compositions occupying the same FCC lattice.

test_descriptorCalculate_KS2022_randomSolution_serial_pair()[source]

Test successful execution of a composition-structure pair in series.
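
A heavily hedged sketch of the random-solution featurization runs above; the method name and the baseStruct/comp keywords are assumptions recalled from the Calculator API, the compositions are illustrative, and the coarse convergence settings used by these tests are omitted.

    from pysipfenn import Calculator

    c = Calculator(autoLoad=False)

    # Pair each composition with a lattice, mirroring the "multiple" tests above.
    c.calculate_KS2022_randomSolutions(baseStruct=["FCC", "BCC"],
                                       comp=["Fe70 Ni30", "Cr33 Fe33 Ni33"],
                                       mode="parallel", max_workers=2)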

KS2022 Featurization Correctness

class TestKS2022(methodName='runTest')[source]

Bases: TestCase

Tests the correctness of the KS2022 descriptor generation function by comparing the results to the reference data for the first 25 structures in the exampleInputFiles directory, stored in the exampleInputFilesDescriptorTable.csv file. That file is also used to test the correctness of the Ward2017 descriptor, which is a superset of KS2022.

setUp()[source]

Reads the reference data from the exampleInputFilesDescriptorTable.csv file and the labels from the first row of that file. Then it reads the first 25 structures from the exampleInputFiles directory and generates the descriptors for them. The results are stored in the functionOutput list. It defines the emptyLabelsIndx list that contains the indices of the labels that are not used in the KS2022 (vs Ward2017) descriptor generation. It also persists the test results in the KS2022_TestResult.csv file.

test_cite()[source]

Tests citation return.

test_resutls()[source]

Compares the results of the KS2022 descriptor generation function to the reference data on a field-by-field basis by calculating the relative difference between the two and requiring it to be less than 1% for all fields except 0-valued fields, where the absolute difference is required to be less than 1e-6.
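
The acceptance criterion can be restated as a short helper; this is an illustrative sketch of the tolerance logic described above, not the test's actual implementation.

    def descriptors_match(calculated, reference, rel_tol=0.01, abs_tol=1e-6):
        """Field-by-field check: <1% relative difference, or <1e-6 absolute for 0-valued fields."""
        for calc, ref in zip(calculated, reference):
            if ref == 0:
                if abs(calc - ref) >= abs_tol:
                    return False
            elif abs(calc - ref) / abs(ref) >= rel_tol:
                return False
        return True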

class TestKS2022Profiling(methodName='runTest')[source]

Bases: TestCase

Test the KS2022 descriptor generation by profiling the execution time of the descriptor generation function for two example structures (JVASP-10001 and diluteNiAlloy).

test_parallel()[source]

Test the parallel execution of the descriptor generation function 24 times for each of the two examples but in parallel with up to 8 workers to speed up the execution.

test_serial()[source]

Test the serial execution of the descriptor generation function 4 times for each of the two examples.

KS2022 Dilute-Optimized Featurization Correctness

class TestKS2022(methodName='runTest')[source]

Bases: TestCase

Test the KS2022 descriptor calculator optimized for dilute systems.

setUp()[source]

Imports the labels expected for the KS2022 dilute descriptor (same as KS2022) and initializes 4 test materials (mp-13, mp-27, mp-165, mp-1211280) to be used in the tests. These 4 test cases should be sufficient to test the dilute descriptor, as the general KS2022 is tested more extensively and problems should propagate to the dilute featurizer. To create the dilute structures, 2x2x2 supercells of the test materials are created and the atom at site 0 is replaced with aluminum. Results for the first test case, comparing the general KS2022, the explicit base, and the implicit (pure) base, are persisted in the KS2022_dilute_TestResult.csv file.
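
An illustrative pymatgen sketch of the dilute-structure construction described above; the POSCAR file name is hypothetical, while make_supercell and replace are standard pymatgen Structure methods.

    from pymatgen.core import Structure

    base = Structure.from_file("POSCAR.mp-13")   # hypothetical local file for one test material
    dilute = base.copy()
    dilute.make_supercell([2, 2, 2])             # 2x2x2 supercell of the base material
    dilute.replace(0, "Al")                      # substitute the atom at site 0 with aluminum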

test_cite()[source]

Tests citation return.

test_resutls_assumePure()[source]

Compare the KS2022_dilute featurizer results with general KS2022 assuming the base structure is pure.

test_resutls_explicitBase()[source]

Compare the KS2022_dilute featurizer results with the general KS2022 using explicit base structures, i.e., structures from before the dilute element was added. It calculates the relative difference between the two and requires it to be less than 1% for all fields except 0-valued fields, where the absolute difference is required to be less than 1e-6.

class TestKS2022_diluteProfiling(methodName='runTest')[source]

Bases: TestCase

Test the dilute version of KS2022 descriptor generation by profiling the execution time of the descriptor generation function for one example dilute structure.

test_parallel()[source]

Test the parallel execution of the descriptor generation function 64 times but in parallel with up to 8 workers to speed up the execution.

test_serial()[source]

Test the serial execution of the descriptor generation function 10 times.

KS2022 Random Solution Featurization Correctness

Ward2017 Featurization Correctness

class TestWard2017(methodName='runTest')[source]

Bases: TestCase

Tests the correctness of the Ward2017 descriptor generation function by comparing the results to the reference data for the first 5 structures in the exampleInputFiles directory, stored in the exampleInputFilesDescriptorTable.csv.

setUp()[source]

Reads the reference data from the exampleInputFilesDescriptorTable.csv file and the labels from the first row of that file. Then it reads the first 5 structures from the exampleInputFiles directory and generates the descriptors for them. The results are stored in the functionOutput list. It also persists the test results in the Ward2017_TestResult.csv file.

test_cite()[source]

Tests citation return.

test_resutls()[source]

Compares the results of the Ward2017 descriptor generation function to the reference data on a field-by-field basis by requiring the absolute difference to be less than 1e-6.

class TestWard2017Profiling(methodName='runTest')[source]

Bases: TestCase

Test the Ward2017 descriptor generation by profiling the execution time of the descriptor generation function for two example structures (JVASP-10001 and diluteNiAlloy).

test_parallel()[source]

Test the parallel execution of the descriptor generation function 24 times for each of the two examples but in parallel with up to 8 workers to speed up the execution.

test_serial()[source]

Test the serial execution of the descriptor generation function 4 times for each of the two examples.

Auto Runtime of All ONNX Models with Ward2017

class TestAllCompatibleONNX_Ward2017(methodName='runTest')[source]

Bases: TestCase

_Requires the models to be downloaded first._ It then tests the runtime of pySIPFENN on all POSCAR files in the exampleInputFiles directory and the persistence of the results in a CSV file.

test_runtime()[source]

Runs the test.

Accuracy of NN9/20/24 Predictions Against Reference

class TestKrajewski2020ModelsFromONNX(methodName='runTest')[source]

Bases: TestCase

_Requires the NN9/20/24 models to be downloaded first._ It takes the 0-Cr8Fe18Ni4.POSCAR file from the exampleInputFiles directory and calculates the energy with the NN9/20/24 models. The results are then compared to the reference results obtained by the authors using pySIPFENN (MxNet->ONNX->PyTorch) and SIPFENN (directly in MxNet) to the 6th decimal place (0.001 meV/atom).
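
A sketch of the prediction run this test performs, assuming the runModels signature and the predictions attribute; the file handling below is illustrative.

    from pymatgen.core import Structure
    from pysipfenn import Calculator

    c = Calculator()   # the NN9/20/24 models must already be downloaded and loaded

    struct = Structure.from_file("0-Cr8Fe18Ni4.POSCAR")       # file from exampleInputFiles
    c.runModels(descriptor="Ward2017", structList=[struct])   # keyword names assumed
    print(c.predictions)   # formation energies compared against the reference to 0.001 meV/atom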

test_resutls()[source]

Runs the test.

Model Exporters Runtime

class TestExporters(methodName='runTest')[source]

Bases: TestCase

Test all model exporting features that can operate on the Calculator object. Note that this will require the models to be downloaded and the environment variable MODELS_FETCHED to be set to true if running in GitHub Actions.
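
A sketch of how the exporters under test are driven; the exportAll method name is an assumption based on the behaviors these tests describe.

    from pysipfenn import Calculator
    from pysipfenn.core.modelExporters import ONNXExporter, TorchExporter, CoreMLExporter

    c = Calculator()                # requires the models to be downloaded and loaded

    ONNXExporter(c).exportAll()     # exports every loaded model to ONNX
    TorchExporter(c).exportAll()    # ... to PyTorch
    CoreMLExporter(c).exportAll()   # ... to CoreML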

setUp()[source]

Initialise the Calculator object for testing.

testCoreMLExport()[source]

Test that the CoreML export works with all models with no errors. Please note that if you are using custom descriptors, you will need to add them to the exporter definition in pysipfenn/core/modelExporters.py.

testExceptions1()[source]

Test that the exceptions are raised correctly by the exporters when the Calculator is empty. Regardless of whether models are present, it skips the automatic loading of models to mimic a fresh install.

testExceptions2()[source]

Test that the exceptions are raised correctly by the exporters when the models are loaded, but the descriptor they are trying to use is not defined in the exporter.

testInit()[source]

Test that the Calculator object is initialised correctly.

testModelsLoaded()[source]

Test that the models are loaded correctly.

testONNXExport()[source]

Test that the ONNX export works with all models with no errors. For two of the models, the export will also simplify or convert to FP16 to check that it gets correctly encoded in the exported file name.

testONNXFP16()[source]

Test that the ONNX FP16 conversion works with all models with no errors.

testONNXSimplify()[source]

Test that the ONNX simplification works with all models with no errors.

testTorchExport()[source]

Test that the PyTorch export works with all models with no errors. Please note that if you are using custom descriptors, you will need to add them to the exporter definition in pysipfenn/core/modelExporters.py.

Defining Custom Models

class TestCustomModel(methodName='runTest')[source]

Bases: TestCase

_Requires the models to be downloaded first._ Test loading a custom model by copying the Krajewski2020_NN24 model to the current directory and loading it from there instead of the default location.
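
A sketch of loading a model file from the current working directory as a custom model; the loadModelCustom signature below is an assumption, and the names mirror the Krajewski2020_NN24 case described above.

    from pysipfenn import Calculator

    c = Calculator(autoLoad=False)

    c.loadModelCustom(networkName="SIPFENN_Krajewski2020_NN24",
                      modelName="Custom copy of the NN24 model",  # illustrative label
                      descriptor="Ward2017",
                      modelDirectory=".")                         # load from CWD, not the default store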

setUp()[source]

Copies the model to CWD.

Return type:

None

tearDown()[source]

Deletes the copied model.

Return type:

None

testCalculation()[source]

Loads the model as a custom model, runs the calculation, and compares the results to the original model's results field by field.

Automatic Model Tuning with OPTIMADE API

class TestModelAdjusters(methodName='runTest')[source]

Bases: TestCase

Test all model adjusting features that can operate on the Calculator object. Note that this will require the models to be downloaded and the environment variable MODELS_FETCHED to be set to true if running in GitHub Actions.

The setup will load the Krajewski2022_NN30 model and create an OPTIMADEAdjuster object for testing that is by default connected to the Materials Project OPTIMADE server and looks for their GGA+U formation energies. In the testFullRoutine test, the adjuster will be used to adjust the model to the Hf-Mo metallic system. The test will cover almost all adjuster functionalities in different ways to hit all anticipated code paths. It also tests the LocalAdjuster class for loading data from CSV and NPY files, which is a parent class of the OPTIMADEAdjuster.
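A hedged sketch of the adjuster workflow these tests cover; the constructor arguments, the fetchAndFeturize/adjust method names, and the OPTIMADE filter string are assumptions to be checked against pysipfenn/core/modelAdjusters.py.

    from pysipfenn import Calculator
    from pysipfenn.core.modelAdjusters import OPTIMADEAdjuster

    c = Calculator()   # with the NN30 model loaded

    adjuster = OPTIMADEAdjuster(c, model="SIPFENN_Krajewski2022_NN30")  # defaults to Materials Project
    adjuster.fetchAndFeturize('elements HAS ALL "Hf","Mo"')             # pull and featurize Hf-Mo data
    adjuster.adjust()                                                   # fine-tune on the fetched formation energies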

pytestmark = [Mark(name='skipif', args=(False,), kwargs={'reason': 'Test depends on the ONNX network files'})][source]

setUp()[source]

Initialises the Calculator and ModelAdjuster objects for testing.

testDataLoading()[source]

Test the data loading functionality of the LocalAdjuster class (note, OPTIMADEAdjuster extends it). It will test loading from both CSV and NPY files exported from the Calculator object. Note that CSV files have names in the first column and headers in the first row, while NPY files are just the data arrays. It tests implicit loading from the Calculator object as well. Lastly, it tests that errors are raised for unsupported descriptors and for data that does not match the selected descriptor's dimensions (an optional check).

testEndpointOverride()[source]

Test the endpoint override functionality of the OPTIMADEAdjuster class. It will test the override of the endpoint and the data fetching from the new endpoint.

testFullRoutine()[source]

Test the full routine of the adjuster based on the default values pointing to Materials Project. It gets the data using OPTIMADE to adjust the model to the Hf-Mo metallic system. The matrix search is reduced to 4 cases to speed up the test and is designed to explore all code paths in the search. The test also checks the highlighting and plotting functionalities of the adjuster.

testInit()[source]

Test that the OPTIMADEAdjuster object has been initialized correctly.

testPlotExceptions()[source]

Test that the plot does not plot anything when no data is present.