# Verifiable Linear Regression

In this tutorial you will learn how to use the Giza stack through a Linear Regression model.

## Installation

To follow this tutorial, you must first complete the following installation steps.
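A minimal sketch of the installation, assuming the CLI and SDK are distributed on PyPI (the exact package names may differ between Giza releases, so check the official installation docs):

```shell
# Install the Giza CLI (assumed PyPI package name)
pip install giza-cli

# Install the Giza Python SDK used later for verifiable predictions
# (assumed package name -- consult the current Giza docs)
pip install giza-agents
```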

## Setup

From your terminal, create a Giza user through our CLI in order to access the Giza Platform:
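The user-creation step looks like this (you will be prompted interactively for your email, username, and password):

```shell
# Create a Giza user; follow the interactive prompts
giza users create
```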

After creating your user, log into Giza:
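A sketch of the login step, which stores an access token locally for subsequent CLI calls:

```shell
# Log in with the credentials you just created
giza users login
```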

*Optional*: you can create an API key for your user so that you don't have to regenerate your access token every few hours.
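The optional API-key step can be sketched as:

```shell
# Optional: create an API key so the CLI can re-authenticate
# without a fresh login every few hours
giza users create-api-key
```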

## Create and Train a Linear Regression Model

We'll start by creating a simple linear regression model using Scikit-Learn and train it with some dummy data.
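A minimal sketch of this step; the data is arbitrary dummy data generated from a known linear relationship so the fit is easy to sanity-check:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Generate dummy data following y = 2*x1 + 5*x2, plus a little noise
rng = np.random.default_rng(42)
X = rng.random((100, 2))
y = 2 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.01, 100)

# Fit a plain scikit-learn linear regression on the dummy data
model = LinearRegression()
model.fit(X, y)
print(model.coef_, model.intercept_)
```

The learned coefficients should land close to the 2 and 5 used to generate the data.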

## Convert the Model to ONNX Format

Giza supports ONNX models, so you'll need to convert the model to ONNX format. After the model is trained, you can convert it using the skl2onnx library.

## Transpile your model to Orion Cairo

For more detailed information on transpilation, please consult the Transpiler resource.

We will use Giza-CLI to transpile our ONNX model to Orion Cairo.
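The transpilation command can be sketched as follows; the file name and output path match the example above and are placeholders for your own:

```shell
# Transpile the ONNX model to Orion Cairo; note the model id and
# version id printed in the output -- you will need them later
giza transpile linear_regression.onnx --output-path verifiable_lr
```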

## Deploy an inference endpoint

For more detailed information on inference endpoints, please consult the Endpoint resource.

Now that our model is transpiled to Cairo, we can deploy an endpoint to run verifiable inferences. We will use the Giza CLI again to deploy the endpoint. Make sure to replace `model-id` and `version-id` with the IDs provided during transpilation.
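A sketch of the deployment command; the numeric IDs shown are examples:

```shell
# Deploy an inference endpoint for the transpiled version;
# replace 1 and 1 with the ids printed during transpilation
giza endpoints deploy --model-id 1 --version-id 1
```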

## Run a verifiable inference

To streamline verifiable inference, you might consider calling the deployed endpoint URL directly. However, this approach requires manually serializing the input for the Cairo program and handling the deserialization of the output. To make this process more user-friendly and keep you within a Python environment, we've introduced a Python SDK designed to facilitate the creation of ML workflows and the execution of verifiable predictions. When you initiate a prediction, our system automatically retrieves the endpoint URL you deployed earlier, converts your input into a Cairo-compatible format, executes the prediction, and then converts the output back into a NumPy object.
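A sketch of a verifiable prediction with the SDK; the import path, `GizaModel` constructor, and `predict` signature reflect the SDK at the time of writing and may have changed, and the IDs and input name are placeholders for your own:

```python
import numpy as np
from giza.agents.model import GizaModel

MODEL_ID = 1    # your model id from transpilation
VERSION_ID = 1  # your version id from transpilation

# The SDK resolves the deployed endpoint URL from the model and version ids
model = GizaModel(id=MODEL_ID, version=VERSION_ID)

# verifiable=True runs the inference on the endpoint and kicks off a proving
# job; the second return value is the request id used later to fetch the proof
result, request_id = model.predict(
    input_feed={"float_input": np.array([[0.5, 0.5]], dtype=np.float32)},
    verifiable=True,
)
print(result, request_id)
```

Keep the returned request id: it identifies the proving job in the next section.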

## Download the proof

For more detailed information on proving, please consult the Prove resource.

Initiating a verifiable inference sets off a proving job on our server, sparing you the complexities of installing and configuring the prover yourself. Upon completion, you can download your proof.

First, let's check the status of the proving job to ensure that it has been completed.
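The status check can be sketched as follows, with placeholder IDs:

```shell
# Check the proving job status; the proof id is the request id
# returned by the verifiable prediction
giza endpoints get-proof --endpoint-id 1 --proof-id "<request-id>"
```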

Remember to substitute `endpoint-id` and `proof-id` with the specific IDs assigned to you throughout this tutorial.

Once the proof is ready, you can download it.

It is best to surround the proof ID with double quotes (`"`) when using the alphanumeric ID.
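A sketch of the download command; the IDs and output file name are placeholders:

```shell
# Download the completed proof to a local file
giza endpoints download-proof --endpoint-id 1 --proof-id "<request-id>" --output-path zk_lr.proof
```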

## Verify the proof

Finally, you can verify the proof.
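A sketch of the verification step; the exact verify command and flags may vary between CLI versions, so consult the Verify resource:

```shell
# Verify the proof on the Giza Platform; replace the id with yours
giza verify --proof-id 1
```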
