After installing the IAA in your Snowflake account, it's time to analyze your workload.
You'll find the landing page of the IAA.
You can get the SMA from the following link. If you need more information on how to run the SMA, find the details in its documentation.
- Run the assessment on your code.
- Click on View Results.
- Navigate to the output folder to get the zip file AssessmentFiles_*.zip.
- Go to your Snowflake account, the one used for the deployment, and navigate to the SMA_EXECUTIONS stage by clicking Data > Databases > [Name of the deployment database] > Stages > SMA_EXECUTIONS.
- Upload your AssessmentFiles_*.zip into the stage (a scripted alternative is shown after this list).
- Navigate to your Interactive Assessment Application.
- Click Reload Interactive Assessment Application.
- The data takes about 30 seconds to load; reload the page if it does not appear right away.
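If you prefer to script the upload instead of using the Snowsight UI, a minimal Snowpark Python sketch could look like the following. The connection parameters, local path, and database/schema names are placeholders for your own deployment values:

```python
from snowflake.snowpark import Session

# Placeholder connection details -- replace with your own account values.
connection_parameters = {
    "account": "<your_account>",
    "user": "<your_user>",
    "password": "<your_password>",
    "role": "<your_role>",
    "database": "<deployment_database>",  # the database used for the IAA deployment
    "schema": "<deployment_schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Upload the assessment zip to the SMA_EXECUTIONS stage.
# auto_compress=False keeps the file as a plain .zip so the app can read it.
session.file.put(
    "output/AssessmentFiles_*.zip",  # hypothetical local path to the SMA output
    "@SMA_EXECUTIONS",
    auto_compress=False,
    overwrite=True,
)

session.close()
```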
In the left-side menu you will find the different aspects of the execution to explore.
- Execution: A brief summary of the selected executions, with a few key metrics such as the Spark API, third-party API, and SQL readiness scores. Here you can select one or multiple executions.
- Inventories: This section provides exportable raw files (xlsx format) covering topics from notebooks to IO file operations (see the sketch after this list).
- Assessment Report: Download the SMA report.
- Code Tree Map: Shows the distribution of files in a graphical tree map. The size of each square represents the size of the code file.
- Readiness by File: Shows the breakdown by file and its readiness for migration.
- Reader Writers: Allows you to understand which files in your project are reading or writing data.
- Dependencies: This section provides information related to your project dependencies, including both internal and external dependencies.
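Once exported, the inventory files are regular xlsx spreadsheets, so you can explore them programmatically. A minimal sketch with pandas (the file name below is hypothetical; use whichever inventory you exported):

```python
import pandas as pd

# Hypothetical file name -- substitute the inventory you exported from the app.
inventory = pd.read_excel("InputFilesInventory.xlsx")

# Quick look at the columns and first few rows of the inventory.
print(inventory.columns.tolist())
print(inventory.head())
```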
This section of the app provides insights into the current capabilities of the SMA for different kinds of APIs. This is not specific to your execution.
It gives you information about the current support for libraries, including programming-language built-ins and third-party libraries. In the Category dropdown you can filter by the kind of library.
The Spark libraries refer specifically to those for Scala/Java and their equivalents in Snowpark. The PySpark libraries are exclusive to Python.
In the image below we see four different columns:
- Category: The name of the group the element belongs to.
- Spark/PySpark Fully Qualified Name: The full name of the function, class, or method in the Spark API.
- Snowpark Fully Qualified Name: The equivalent function, class, or method in Snowpark.
- Mapping Status: Shows how the SMA will treat each Spark/Pandas element; for more details on the meaning of each status, please check the SMA documentation.
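To illustrate what such a mapping means in practice, consider `pyspark.sql.DataFrame.filter`, whose Snowpark counterpart is `snowflake.snowpark.DataFrame.filter`. A small sketch, assuming an existing Snowpark `session` and a hypothetical table named EVENTS:

```python
# PySpark (source):
#   df = spark.read.table("EVENTS").filter("STATUS = 'active'")

# Snowpark equivalent (target):
from snowflake.snowpark import Session

def active_events(session: Session):
    # snowflake.snowpark.DataFrame.filter accepts a SQL expression string,
    # mirroring pyspark.sql.DataFrame.filter.
    return session.table("EVENTS").filter("STATUS = 'active'")
```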