Quickstart

In this resource, you'll learn the basics to get started using Giza products!

Before you begin, make sure you have all the necessary libraries installed.

Create a User and Log In

From your terminal, create a Giza user through our CLI in order to access the Giza Platform:

giza users create

After creating your user, log into Giza:

giza users login

Optional: you can create an API key for your user so you don't have to regenerate your access token every few hours.

giza users create-api-key

Transpile a Model and Deploy an Inference Endpoint

This step is only necessary if you have not yet deployed a verifiable inference endpoint.

Transpilation

Transpilation is a crucial step in deploying verifiable machine learning models. It transforms a model into a Cairo model, which can generate ZK proofs.

The transpilation process starts by reading the model from the specified path; the model is then sent for transpilation.

giza transpile awesome_model.onnx --output-path my_awesome_model
[giza][2024-02-07 16:31:20.844] No model id provided, checking if model exists ✅
[giza][2024-02-07 16:31:20.845] Model name is: awesome_model
[giza][2024-02-07 16:31:21.599] Model Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.436] Version Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.437] Sending model for transpilation ✅
[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version compiled and Sierra is saved at Giza ✅
[giza][2024-02-07 16:32:13.516] Transpilation recieved! ✅
[giza][2024-02-07 16:32:14.349] Transpilation saved at: my_awesome_model

During transpilation, a model instance and a version were created on the Giza platform.

For more information about transpilation, please check the Transpile resource.

Deploy an Inference Endpoint

To create a new service, use the deploy command. It deploys a verifiable machine learning service ready to accept predictions at the /cairo_run endpoint, providing a straightforward way to expose your model's capabilities as an API endpoint.

> giza endpoints deploy --model-id 1 --version-id 1
▰▰▰▰▰▱▱ Creating endpoint!
[giza][2024-02-07 12:31:02.498] Endpoint is successful ✅
[giza][2024-02-07 12:31:02.501] Endpoint created with id -> 1 ✅
[giza][2024-02-07 12:31:02.502] Endpoint created with endpoint URL: https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app 🎉

Create your first Agent

Creating a new agent requires an inference endpoint to have been deployed beforehand. If you don't have one, please follow the previous section.

Step 1: Create an Account

The first step is to create an account (wallet) using the Ape framework; this is the account we will use to sign the smart contract transactions. Run the following command and provide the required data.

$ ape accounts generate <account name>
Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C' with HDPath m/44'/60'/0'/0/0 has been added with the id '<account name>'

It will ask you for a passphrase; make sure to save it in a safe place, as it will be used to unlock the account when signing.

We encourage creating a new account for each agent, as it allows you to manage the agent's permissions and access control more effectively, but importing an existing account is also possible.

Step 2: Fund the Account

Before we can create an AI Agent, we need to fund the account with some ETH. You can do this by sending some ETH to the account address generated in the previous step.

If you are using the Sepolia testnet, you can get some testnet ETH from a faucet like the Alchemy Sepolia Faucet or the LearnWeb3 Faucet. If you are on mainnet, you will need to transfer funds to it.

Step 3: Create an Agent using the CLI

With a funded account, we can create an AI Agent by running the following command:

giza agents create --model-id <model-id> --version-id <version-id> --name <agent name> --description <agent description>

The information needed is:

  • model-id: the id of the model used in the endpoint.

  • version-id: the id of the version used in the endpoint.

  • name: name for the agent.

  • description: an optional description of the agent.

This command will prompt you to choose an Ape account; specify the one you want to use and the agent will be created.

Step 4: Setup your Agent with the SDK

Now you are set to use an Agent through code and interact with smart contracts.

⚠️ An environment variable named <ACCOUNT_NAME>_PASSPHRASE must be set in the environment. It is needed to decrypt the account and sign transactions.
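As a minimal sketch (the account alias my_agent_account is hypothetical), you can verify that the variable is present before constructing the agent, so a missing passphrase fails early with a clear message:

```python
import os

# Hypothetical alias; replace with the Ape account name you created.
ACCOUNT_NAME = "my_agent_account"
env_var = f"{ACCOUNT_NAME.upper()}_PASSPHRASE"

# Prefer exporting this in your shell or a .env file rather than hardcoding it.
os.environ.setdefault(env_var, "my-secret-passphrase")

# Fail early if the variable is missing, instead of erroring at signing time.
if env_var not in os.environ:
    raise RuntimeError(f"{env_var} must be set before creating the agent")
```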

from giza.agents import GizaAgent

agent = GizaAgent(
    id=<model-id>,
    version_id=<version-id>,
    contracts={
        # contracts maps <alias>: <contract-address>
        "mnist": "0x17807a00bE76716B91d5ba1232dd1647c4414912",
        "token": "0xeF7cCAE97ea69F5CdC89e496b0eDa2687C95D93B",
    },
    # Specify the chain that you are using
    chain="ethereum:sepolia:geth",
)

result = agent.predict(input_feed=[42], verifiable=True)

# This is only using read functions, so no need to sign
with agent.execute() as contracts:
    # Accessing the value will block until the prediction is verified
    if result.value > 0.5:
        # Read the contract name
        result = contracts.mnist.name()
        print(result)
    elif result.value < 0.5:
        # Read the contract name
        result = contracts.token.name()
        print(result)

Use Datasets SDK

This section is mainly intended for developers who are already familiar with the fundamentals of Python and its common ML libraries and frameworks.

  1. Import giza-datasets

from giza.datasets import DatasetsHub, DatasetsLoader

Additionally, you might need to run the following lines. See DatasetsLoader.

import os
import certifi

os.environ['SSL_CERT_FILE'] = certifi.where()

  2. Query the datasets using a DatasetsHub object

hub = DatasetsHub()

With the DatasetsHub() object, we can now query the DatasetsHub to find the right dataset for our ML model. See DatasetsHub for further instructions. Alternatively, you can browse the DatasetsHub pages to explore the available datasets from your browser.

Let's use the list_tags() function to list all the tags, and then get_by_tag() to query all the datasets with the "Yearn-v2" tag.

print(hub.list_tags())

[ 'Trade Volume', 'DeFi', 'Yearn-v2','Interest Rates','compound-v2',....]

Yearn-v2 looks interesting; let's search for all the datasets that have the 'Yearn-v2' tag.

datasets = hub.get_by_tag('Yearn-v2')

for dataset in datasets:
    hub.describe(dataset.name)
                        Details for yearn-individual-deposits                        
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Attribute     ┃ Value                                                             ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Path          │ gs://datasets-giza/Yearn/Yearn_Individual_Deposits.parquet        │
├───────────────┼───────────────────────────────────────────────────────────────────┤
│ Description   │ Individual Yearn Vault deposits                                   │
├───────────────┼───────────────────────────────────────────────────────────────────┤
│ Tags          │ DeFi, Yield, Yearn-v2, Ethereum, Deposits                         │
├───────────────┼───────────────────────────────────────────────────────────────────┤
│ Documentation │ https://datasets.gizatech.xyz/hub/yearn/individual-vault-deposits │
└───────────────┴───────────────────────────────────────────────────────────────────┘

yearn-individual-deposits looks great!

  3. Load a dataset using DatasetsLoader

loader = DatasetsLoader()

Having instantiated DatasetsLoader(), all we need to do is load the dataset by the name we found using DatasetsHub().

df = loader.load('yearn-individual-deposits')

df.head()

shape: (5, 7)

Keep in mind that giza-datasets uses Polars (and not Pandas) as the underlying DataFrame library.


Perfect, the Dataset is loaded correctly and ready to go! Now we can use our preferred ML Framework and start building.
