In this tutorial you will learn how to use the Giza stack through a Linear Regression model.
Installation
To follow this tutorial, first complete the installation steps below.
Handling Python versions with Pyenv
You should install Giza tools in a virtual environment. If you’re unfamiliar with Python virtual environments, take a look at this guide. A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.
Install Python 3.11 using pyenv:
pyenv install 3.11.0
Set Python 3.11 as local Python version:
pyenv local 3.11.0
Create a virtual environment using Python 3.11:
pyenv virtualenv 3.11.0 my-env
Activate the virtual environment:
pyenv activate my-env
Now, your terminal session will use Python 3.11 for this project.
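You can confirm this before moving on:
python --version
The output should report Python 3.11.0.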
From your terminal, create a Giza user through our CLI in order to access the Giza Platform:
giza users create
After creating your user, log into Giza:
giza users login
Optional: create an API key for your user so you don't have to regenerate your access token every few hours.
giza users create-api-key
Create and Train a Linear Regression Model
We'll start by creating a simple linear regression model using Scikit-Learn and train it with some dummy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Generate some dummy data
X = np.random.rand(100, 1) * 10  # 100 samples, 1 feature
y = 2 * X + 1 + np.random.randn(100, 1) * 2  # y = 2x + 1 + noise

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a linear regression model
model = LinearRegression()

# Train the model
model.fit(X_train, y_train)
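Before exporting, you can optionally sanity-check the fit. This step is not part of the Giza workflow; it just confirms the model recovered something close to the true relationship y = 2x + 1:

# Optional sanity check (not required for the rest of the tutorial)
print(f"R^2 on test set: {model.score(X_test, y_test):.3f}")

# The learned parameters should be close to the true slope (2) and intercept (1)
print(f"coef: {model.coef_[0][0]:.3f}, intercept: {model.intercept_[0]:.3f}")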
Convert the Model to ONNX Format
Giza supports ONNX models, so you'll need to convert the trained model to ONNX format. You can do this with the skl2onnx library.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Define the initial types for the ONNX model
initial_type = [('float_input', FloatTensorType([None, X_train.shape[1]]))]

# Convert the scikit-learn model to ONNX
onnx_model = convert_sklearn(model, initial_types=initial_type)

# Save the ONNX model to a file
with open("linear_regression.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
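Optionally, you can verify the export with onnxruntime (an extra dependency, installed with pip install onnxruntime and not required by the rest of this tutorial) by checking that the ONNX model reproduces scikit-learn's predictions:

import numpy as np
import onnxruntime as ort

# Run the exported model on the test set
sess = ort.InferenceSession("linear_regression.onnx")
onnx_pred = sess.run(None, {"float_input": X_test.astype(np.float32)})[0]

# Should match scikit-learn's predictions up to float32 precision
print("max abs difference:", np.abs(onnx_pred - model.predict(X_test)).max())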
Transpile your model to Orion Cairo
For more detailed information on transpilation, please consult the Transpiler resource.
We will use the Giza CLI to transpile our ONNX model to Orion Cairo.
$ giza transpile linear_regression.onnx --output-path verifiable_lr
>>>>
[giza][2024-03-19 10:43:11.351] No model id provided, checking if model exists ✅
[giza][2024-03-19 10:43:11.354] Model name is: linear_regression
[giza][2024-03-19 10:43:11.586] Model Created with id -> 447! ✅
[giza][2024-03-19 10:43:12.093] Version Created with id -> 1! ✅
[giza][2024-03-19 10:43:12.094] Sending model for transpilation ✅
[giza][2024-03-19 10:43:43.185] Transpilation is fully compatible. Version compiled and Sierra is saved at Giza ✅
[giza][2024-03-19 10:43:43.723] Downloading model ✅
[giza][2024-03-19 10:43:43.731] model saved at: verifiable_lr
Deploy an inference endpoint
For more detailed information on inference endpoints, please consult the Endpoint resource.
Now that our model is transpiled to Cairo, we can deploy an endpoint to run verifiable inferences. We will use the Giza CLI again to deploy the endpoint. Make sure to replace model-id and version-id with the IDs returned during transpilation.
$ giza endpoints deploy --model-id 447 --version-id 1
▰▱▱▱▱▱▱ Creating endpoint!
[giza][2024-03-19 10:51:48.551] Endpoint is successful ✅
[giza][2024-03-19 10:51:48.557] Endpoint created with id -> 109 ✅
[giza][2024-03-19 10:51:48.558] Endpoint created with endpoint URL: https://endpoint-raphael-doukhan-447-1-a09e4e6f-7i3yxzspbq-ew.a.run.app 🎉
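If you lose track of these IDs later, the Giza CLI can list the endpoints tied to your user; the subcommand below is an assumption based on the CLI's naming scheme, so check giza endpoints --help if it differs in your version:

giza endpoints list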
Run a verifiable inference
To run a verifiable inference, you could call the endpoint URL obtained after deployment directly. However, that approach requires manually serializing the input for the Cairo program and handling the deserialization of its output. To make the process more user-friendly and keep you within a Python environment, we've introduced a Python SDK designed to facilitate the creation of ML workflows and the execution of verifiable predictions. When you initiate a prediction, our system automatically retrieves the endpoint URL you deployed earlier, converts your input into a Cairo-compatible format, executes the prediction, and converts the output back into a NumPy object.
import numpy as np
from giza.agents.model import GizaModel

MODEL_ID = 447  # Update with your model ID
VERSION_ID = 1  # Update with your version ID

def prediction(input, model_id, version_id):
    model = GizaModel(id=model_id, version=version_id)
    (result, proof_id) = model.predict(
        input_feed={'input': input}, verifiable=True
    )
    return result, proof_id

def execution():
    # The input data type should match the model's expected input
    input = np.array([[5.5]]).astype(np.float32)
    (result, proof_id) = prediction(input, MODEL_ID, VERSION_ID)
    print(
        f"Predicted value for input {input.flatten()[0]} is {result[0].flatten()[0]}"
    )
    return result, proof_id

execution()
11:34:04.423 | INFO | Created flow run 'proud-perch' for flow 'ExectuteCairoLR'
11:34:04.424 | INFO | Action run 'proud-perch' - View at https://actions-server-raphael-doukhan-dblzzhtf5q-ew.a.run.app/flow-runs/flow-run/637bd0e0-d7e8-4d89-8c07-a266e6c280ce
11:34:04.746 | INFO | Action run 'proud-perch' - Created task run 'PredictLRModel-0' for task 'PredictLRModel'
11:34:04.748 | INFO | Action run 'proud-perch' - Executing 'PredictLRModel-0' immediately...
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
11:34:08.194 | INFO | Task run 'PredictLRModel-0' - Finished in state Completed()
11:34:08.197 | INFO | Action run 'proud-perch' - Predicted value for input 5.5 is 12.208511352539062
11:34:08.313 | INFO | Action run 'proud-perch' - Finished in state Completed()
(array([[12.20851135]]), '"3a15bca06d1f4788b36c1c54fa71ba07"')
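Note that the returned proof_id string embeds literal double quotes, as you can see in the tuple above. If you need the bare ID in Python, strip them first:

# proof_id as returned by model.predict (see the output above)
proof_id = '"3a15bca06d1f4788b36c1c54fa71ba07"'
print(proof_id.strip('"'))  # 3a15bca06d1f4788b36c1c54fa71ba07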
Download the proof
For more detailed information on proving, please consult the Prove resource.
Initiating a verifiable inference sets off a proving job on our server, sparing you the complexities of installing and configuring the prover yourself. Upon completion, you can download your proof.
First, let's check the status of the proving job to ensure that it has been completed.
Remember to substitute endpoint-id and proof-id with the specific IDs assigned to you throughout this tutorial.
$ giza endpoints get-proof --endpoint-id 109 --proof-id "3a15bca06d1f4788b36c1c54fa71ba07"
>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 109 ✅
{
  "id": 664,
  "job_id": 831,
  "metrics": {
    "proving_time": 15.083126
  },
  "created_date": "2024-03-19T10:41:11.120310"
}
Once the proof is ready, you can download it.
$ giza endpoints download-proof --endpoint-id 109 --proof-id "3a15bca06d1f4788b36c1c54fa71ba07" --output-path zklr.proof
>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 109 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zklr.proof ✅
Tip: wrap the proof-id in double quotes (") when passing the alphanumeric ID on the command line.