
Guide for Developers


0. Development Environment Setup

Please follow the Getting Started Guide to install the required packages and obtain the codebase.

1. Backend Development Setup

Set up PostgreSQL locally

Texera uses PostgreSQL to manage user data and system metadata. To install and configure it, follow the instructions below:

  1. Install PostgreSQL 14+. On macOS, a simple brew install postgresql@14 works.

  2. Install PGroonga to enable full-text search. On macOS, a simple brew install pgroonga works.

  3. Create texera_db in Postgres by running core/scripts/sql/texera_ddl.sql; this database stores user data.

  4. Create texera_iceberg_catalog in Postgres by running core/scripts/sql/iceberg_postgres_catalog.sql; this database stores the Iceberg catalogs.

  5. Edit core/workflow-core/src/main/resources/storage.conf and change iceberg.catalog.type to postgres.
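
For concreteness, the database steps and the config change might look like this; a sketch assuming a local Postgres instance whose default role can connect (adjust psql flags such as -U or -h to your setup):

    # run the DDL scripts against the local Postgres instance
    psql -f core/scripts/sql/texera_ddl.sql
    psql -f core/scripts/sql/iceberg_postgres_catalog.sql

    # then, in core/workflow-core/src/main/resources/storage.conf:
    #   iceberg.catalog.type = postgres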

Set up LakeFS + MinIO locally

Texera requires LakeFS and an S3-compatible object store (MinIO is one implementation) for dataset storage. Setting up these two services locally is required for Texera's backend to be fully functional. You may also refer to this PR to see how we introduced them as the underlying storage and how the architecture fits together.

Here are two ways to set up LakeFS + MinIO:

1. Use Docker (Highly recommended for local development)

  • Install Docker Desktop, which contains both the Docker engine and Docker Compose. Make sure to launch Docker after installing it.
  • Go to the directory core/file-service/src/main/resources.
  • Configure docker-compose.yml to mount the data to a local folder: search for volumes in the file and follow the instructions in the comments. This step is required; otherwise your data can be lost when the containers are deleted.
  • Execute docker compose up -d in that directory.
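
Putting the steps together (docker compose ps is the standard Compose command to check container status):

    cd core/file-service/src/main/resources
    docker compose up -d
    docker compose ps   # the LakeFS and MinIO containers should both be running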

2. Use Binary (Recommended for production deployment, not recommended for local development)

Refer to https://docs.lakefs.io/howto/deploy/ for how to deploy LakeFS and https://min.io/docs/minio/kubernetes/upstream/index.html for how to deploy MinIO. Once you finish the deployment, you also need to configure the following items in core/workflow-core/src/main/resources/storage.conf:

    # Configuration of LakeFS & S3 for dataset storage
    lakefs {
        endpoint = "http://localhost:8000/api/v1"
        endpoint = ${?STORAGE_LAKEFS_ENDPOINT}

        auth {
            api-secret = ""
            api-secret = ${?STORAGE_LAKEFS_AUTH_API_SECRET}

            username = ""
            username = ${?STORAGE_LAKEFS_AUTH_USERNAME}

            password = ""
            password = ${?STORAGE_LAKEFS_AUTH_PASSWORD}
        }

        block-storage {
            type = ""
            type = ${?STORAGE_LAKEFS_BLOCK_STORAGE_TYPE}

            bucket-name = ""
            bucket-name = ${?STORAGE_LAKEFS_BLOCK_STORAGE_BUCKET_NAME}
        }
    }

    s3 {
        endpoint = ""
        endpoint = ${?STORAGE_S3_ENDPOINT}

        region = ""
        region = ${?STORAGE_S3_REGION}

        auth {
            username = ""
            username = ${?STORAGE_S3_AUTH_USERNAME}

            password = ""
            password = ${?STORAGE_S3_AUTH_PASSWORD}
        }
    }
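
Each key in this block can be overridden by the environment variable shown next to it (HOCON's optional ${?VAR} substitution). For example, to point the backend at a LakeFS + MinIO pair from the Docker setup above, you might export something like the following; the values are illustrative and must match your actual deployment (minioadmin is only MinIO's out-of-the-box default):

    export STORAGE_LAKEFS_ENDPOINT="http://localhost:8000/api/v1"
    export STORAGE_S3_ENDPOINT="http://localhost:9000"
    export STORAGE_S3_AUTH_USERNAME="minioadmin"   # MinIO access key
    export STORAGE_S3_AUTH_PASSWORD="minioadmin"   # MinIO secret key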

Import the Core project into IntelliJ

Before you import the project, you need to have the "Scala" and "SBT Executor" plugins installed in IntelliJ.

  1. In IntelliJ, open File -> New -> Project From Existing Sources, then choose the core folder.
  2. In the next window, select Import Project from external model, then select sbt.
  3. In the next window, make sure Project JDK is set. Click OK.
  4. IntelliJ should import and build this Scala project. In the terminal under core, run:
sbt clean protocGenerate

This generates the code specified by the proto files, and IntelliJ indexing should start. Wait until the indexing and importing are completed. Then, on the right, you can open the sbt tab and check the loaded core project and the handful of sub-projects under core.

  5. When IntelliJ prompts "Scalafmt configuration detected in this project" in the bottom-right corner, select "Use scalafmt formatter". If you miss the prompt, you can check the Event Log at the bottom right.

Your PR should pass both the scalafix (lint) check and the scalafmt (format) check.

  • To check lint, run sbt "scalafixAll --check" under core; to fix lint issues, run sbt scalafixAll.
  • To check formatting, run sbt scalafmtCheckAll under core; to fix formatting, run sbt scalafmtAll.
  • When you need both, run scalafmt after scalafix (see the commands below).
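
The full pre-PR sequence therefore looks like this:

    cd core
    sbt scalafixAll             # fix lint issues first
    sbt scalafmtAll             # then fix formatting
    sbt "scalafixAll --check"   # verify lint
    sbt scalafmtCheckAll        # verify formatting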

Run the backend engine in IntelliJ

The easiest way to run the backend services is in IntelliJ. Currently we have a couple of microservices for different purposes:

  • TexeraWebApplication: provides user login, read & write of community resources, and loading of the available operators' metadata.
  • FileService: provides dataset-related endpoints, including dataset management, access control, and reading & writing files across datasets.
  • WorkflowCompilingService: propagates schemas and checks for static errors while a workflow is being constructed.
  • ComputingUnitMaster: manages workflow execution and serves as the master node of the computing cluster.
  • ComputingUnitWorker: a worker node in the computing cluster (not a web server).

To run a workflow using Amber, the distributed engine, we need to run the controller process, TexeraWebApplication, and a master node of the computing cluster, ComputingUnitMaster.

To run each of the above services, go to the corresponding Scala file (e.g., for TexeraWebApplication, open TexeraWebApplication.scala), run the main function by pressing the green run button, and wait for the process to start up.

For TexeraWebApplication, the following message indicates that it is successfully running:

[main] [akka.remote.Remoting] Remoting now listens on addresses:
org.eclipse.jetty.server.Server: Started

For ComputingUnitMaster, the following prompt indicates that it is successfully running:

---------Now we have 1 node in the cluster---------

Run the backend engine in the command line (optional)

An alternative is to run the backend engine from the command line. Navigate to the core folder in a terminal window and run scripts/deploy-daemon.sh, which launches all microservices as background processes; to terminate them, run scripts/terminate-daemon.sh.
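
For example:

    cd core
    scripts/deploy-daemon.sh      # launch all microservices in the background
    scripts/terminate-daemon.sh   # terminate them when you are done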

Testing the backend

  1. The test framework is ScalaTest. For the Amber engine, tests are located under core/amber/src/test; for WorkflowCompilingService, tests are located under core/workflow-compiling-service. You can find both unit tests and e2e tests.
  2. To execute them, navigate to the core directory in the command line and run sbt test (see below).
  3. If you use IntelliJ to execute the test cases, make sure the working directory is set correctly:
  • For the Amber engine's tests, the working directory should be core/amber.
  • For the compiling service's tests, the working directory should be core.
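
sbt can also run a single suite, which is handy while iterating (the suite name below is illustrative; substitute a real spec class):

    cd core
    sbt test                          # run the entire test suite
    sbt "testOnly *SomeOperatorSpec"  # run one suite by (wildcard) name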

2. Frontend Development

This section is for developers who work on the frontend of the project. It is NOT needed if you develop only the backend.

Install Angular CLI

We recommend Node.js 18 LTS, and yarn@4.5.1 is required. Install yarn:

npm install -g yarn
corepack enable && corepack prepare yarn@4.5.1 --activate && yarn --cwd core/gui set version 4.5.1

You need to install the Angular CLI and the other dependencies to build and run the GUI:

yarn install

Ignore the warnings (they are usually shown in yellow or start with WARN).

Develop Frontend in IntelliJ

  1. In IntelliJ, open File -> Open, then choose the gui folder inside core.
  2. IntelliJ should import the project. Wait until the indexing and importing are completed.
  3. Click the green Run button next to the Angular CLI Server run configuration.
  4. Wait a moment for the server to start, then open a browser and access http://localhost:4200. You should see the Texera UI with a canvas.

Every time you save the changes to the frontend code, the browser will automatically refresh to show the latest UI.
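
If you prefer the command line to IntelliJ, the standard Angular dev server gives the same live-reload behavior (assuming the Angular CLI installed by the yarn install above; depending on your PATH you may need to prefix the command with yarn):

    cd core/gui
    ng serve    # serves the UI on http://localhost:4200 and rebuilds on save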

Testing the frontend

Before merging your code to the master branch, you need to pass the existing unit tests first.

  1. Open a command line. Navigate to the core/gui directory.
  2. Start the test:
ng test --watch=false
  3. Wait a moment for the tests to start. You should also write unit tests to cover your code; when others change your code, they will have to keep these tests passing, which protects your features. Unit tests are written in .spec.ts files; a minimal sketch follows.
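
For illustration only, here is a minimal Jasmine spec of the kind ng test picks up; the file name and the function under test are hypothetical (real specs usually exercise Angular components or services):

    // example.spec.ts — hypothetical minimal spec discovered by `ng test`
    describe('add', () => {
      const add = (a: number, b: number): number => a + b;

      it('adds two numbers', () => {
        expect(add(2, 3)).toBe(5);
      });
    });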

Deploy the frontend to the production environment

Run the following command

yarn run build

This command builds an optimized production bundle of the frontend, which takes a while. After that, start the backend engine in IntelliJ and access http://localhost:8080 in your browser.

Formatting frontend code

Run the following command

yarn format:fix

This command will fix the formatting of the frontend code.

3. Python UDF

  • Install [email protected]; it's recommended to create a virtualenv so you get a clean copy of Python (see the sketch after this list). Note: if you are using an Apple Silicon (M1) machine, please install Python through Anaconda.
  • Obtain the Python executable path: for example, run which python (or where python on Windows) and copy the returned path.
  • Fill the Python executable path into core/amber/src/main/resources/udf.conf, under the path key.
  • Install the dependencies: pip install -r core/amber/requirements.txt -r core/amber/operator-requirements.txt -r core/amber/r-requirements.txt.
  • To format Python files, run black core/amber/src/main/python.
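
A typical sequence, with an illustrative virtualenv name:

    # create and activate a virtualenv (the name is illustrative)
    python -m venv texera-venv
    source texera-venv/bin/activate

    which python   # copy this path into udf.conf under the "path" key

    pip install -r core/amber/requirements.txt \
                -r core/amber/operator-requirements.txt \
                -r core/amber/r-requirements.txt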

4. User Dashboard (Optional)

  1. Make sure you have installed PostgreSQL and configured your local instance with core/scripts/sql/texera_ddl.sql.

  2. Edit core/gui/src/environments/environment.default.ts, change userSystemEnabled to true.

  3. Edit core/amber/src/main/resources/application.conf: change user-sys.enabled to true.

  4. Edit core/workflow-core/src/main/resources/storage.conf: change jdbc.url, jdbc.username, and jdbc.password to the URL and credentials of the texera_db you just created (see the sketch after this list).

  5. Optional: add googleClientId to the same file to enable Google login.

  6. Restart the frontend and backend. You should see the homepage, where you can register or log in.
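
For reference, the JDBC portion of storage.conf might end up looking like this; the values are illustrative, and the exact nesting should match what is already in the file:

    jdbc {
        url = "jdbc:postgresql://localhost:5432/texera_db"
        username = "postgres"   # your Postgres user
        password = ""           # your Postgres password
    }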

5. Email Notification (Optional)

  1. Set clientId and clientSecret in core/amber/src/main/resources/application.conf.
  2. Login to Texera with an admin account.
  3. Open the Gmail dashboard under the admin tab.
  4. Authorize a Google account for sending emails.
  5. Send a test email.

6. Misc (Optional)

This part is optional; you only need it if you are working on a related task.

To enable MongoDB storage for sink

  1. Install MongoDB in your development environment (4.4 works for Ubuntu; 5.0 works for Mac/Windows).
  2. Start MongoDB with the default configuration.
  3. Edit core/amber/src/main/resources/application.conf and change storage.mode to "mongodb".
  4. Start Texera; the results of sink operators will be saved into MongoDB where applicable.

To create a new database table and write queries using Java through Jooq

  1. Create the needed new table in PostgreSQL and update core/scripts/sql/texera_ddl.sql to include the new table.
  2. Run core/dao/src/main/scala/edu/uci/ics/texera/dao/JooqCodeGenerator.scala to generate the classes for the new table.
  3. Create a helper class under core/amber/src/main/scala/edu/uci/ics/texera/web/resource/dashboard.

Note: jOOQ generates DAOs for simple operations. If the requested SQL query is complex, the developer can use the generated Table classes to implement the operation, as in the sketch below.
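
A hedged sketch of such a query; the USER table, its UID and NAME fields, and the import path are illustrative stand-ins for the classes JooqCodeGenerator produces from your new table (adjust the import to your generated package):

    // illustrative only: USER stands in for a generated table class
    import edu.uci.ics.texera.dao.jooq.generated.Tables.USER // assumed path
    import org.jooq.DSLContext

    // look up a user's name by id using the generated Table classes
    def findUserName(ctx: DSLContext, uid: Integer): String =
      ctx.select(USER.NAME)
        .from(USER)
        .where(USER.UID.eq(uid))
        .fetchOneInto(classOf[String])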

Disable password login

Edit core/gui/src/environments/environment.default.ts and change localLogin to false.

Enforce invite only

Edit core/gui/src/environments/environment.default.ts and change inviteOnly to true.

Role Annotations for Backend Endpoints

There are two types of permission annotations for backend endpoints:

  1. @RolesAllowed(Array("Role"))
  2. @PermitAll

Please don't leave the permission setting blank: if the annotation is missing for an endpoint, it defaults to @PermitAll.
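
For instance, a resource method might be annotated as in this sketch; the class, path, and role name are illustrative, assuming the javax.annotation.security annotations used with Dropwizard/Jersey resources:

    import javax.annotation.security.RolesAllowed
    import javax.ws.rs.{GET, Path, Produces}

    @Path("/example")
    class ExampleResource {
      @GET
      @RolesAllowed(Array("ADMIN"))        // only admins may call this endpoint
      @Produces(Array("application/json"))
      def getExample(): String = "{}"
    }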