Giza provides two methods for proving Orion Cairo programs: directly after running inference on the Giza Platform, or through the CLI. Below are detailed instructions for both methods.
Option 1: Prove a Model After Running Inference
Deploying Your Model
After deploying an endpoint of your model on the Giza Platform, you will receive a URL for your deployed model. Refer to the Endpoints section for more details on deploying endpoints.
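As a quick reference, a minimal deployment sketch using the `giza endpoints deploy` command; the model and version IDs are placeholders, and the Endpoints section documents the full set of options:

```bash
giza endpoints deploy --model-id <MODEL_ID> --version-id <VERSION_ID>
```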
Running Inference
To run inference, use the /cairo_run endpoint of your deployed model's URL. For example:
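A minimal sketch using curl. The endpoint URL is a placeholder, and the `args` payload shown here is an assumption; its exact format depends on your model's expected inputs:

```bash
curl -X POST https://<YOUR_ENDPOINT_URL>/cairo_run \
  -H "Content-Type: application/json" \
  -d '{"args": "[1 2] [3 4]"}'
```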
This call executes the inference, generates Trace and Memory files on the platform, and initiates a proving job. It returns the inference output along with a request ID.
Checking Proof Status
To check the status of your proof, use the following command:
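A sketch assuming the `giza endpoints get-proof` command, where `<ENDPOINT_ID>` identifies your deployed endpoint and `<REQUEST_ID>` is the ID returned by the inference call:

```bash
giza endpoints get-proof --endpoint-id <ENDPOINT_ID> --proof-id <REQUEST_ID>
```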
Option 2: Prove a Model Directly Using the CLI
Alternatively, you can prove a model directly using the CLI without deploying it for inference. This method requires you to provide Trace and Memory files, which can only be obtained by running CairoVM in proof mode, as sketched below.
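A sketch of producing those files with the cairo-vm CLI from lambdaclass/cairo-vm. The binary name, flags, and layout shown here are assumptions based on that CLI; verify them against the pinned commit mentioned below:

```bash
# Run a compiled Cairo program in proof mode, emitting Trace and Memory files
cairo-vm-cli your_program.json --proof_mode --layout all_cairo \
  --trace_file trace.bin --memory_file memory.bin
```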
Running the Prove Command
Execute the following command to prove your model:
```bash
giza prove --trace <TRACE_PATH> --memory <MEMORY_PATH> --output-path <OUTPUT_PATH>
```
This option is not recommended because it requires working directly with CairoVM. If you opt for this method, make sure you use the following commit of CairoVM: 1a78237.
Job Size
When generating a proof, you can choose the size of the underlying proving job:
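Assuming the job size is exposed through a `--size` flag with values such as S, M, L, and XL (this flag and its values are an assumption; check `giza prove --help` for the exact options), a sized invocation might look like:

```bash
giza prove --trace <TRACE_PATH> --memory <MEMORY_PATH> --output-path <OUTPUT_PATH> --size M
```

Larger sizes allocate more resources to the proving job, which bigger models may require.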