Introduction

Xenova is a high-performance AI inference engine built natively for Solana. It enables developers to run complex machine learning models on-chain with unprecedented speed and efficiency.

By leveraging Solana's parallel processing capabilities and custom runtime optimizations, Xenova achieves sub-millisecond inference times for most models, making real-time AI applications practical on blockchain.

  • <1ms inference times
  • 99.9% efficient
  • 100% secure

Quick Start

Get started with Xenova in under 5 minutes. Follow these simple steps to run your first AI model on Solana.

1. Connect Your Wallet

Start by connecting your Solana wallet. Xenova supports all major wallets including Phantom, Solflare, and Backpack.
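A minimal sketch of wallet detection on the client side. The provider names below mirror the wallets listed above, but the exact objects each wallet injects into the page are assumptions, not part of the Xenova SDK; consult each wallet's own documentation for the real connect flow.

```typescript
// Hypothetical helper: pick the first injected Solana wallet provider.
// Provider names and shapes here are illustrative assumptions.
type WalletProvider = { connect: () => Promise<{ publicKey: string }> };

function pickProvider(
  injected: Record<string, WalletProvider | undefined>
): WalletProvider {
  // Preference order follows the wallets mentioned above.
  for (const name of ["phantom", "solflare", "backpack"]) {
    const provider = injected[name];
    if (provider) return provider;
  }
  throw new Error("No supported Solana wallet found");
}
```

In a browser you would pass whatever objects the installed wallets expose on `window`, then `await provider.connect()` to obtain the user's public key.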

2. Install the SDK

Install the Xenova SDK using your preferred package manager.

npm install @xenova/core
# or
yarn add @xenova/core
# or
pnpm add @xenova/core

3. Run Your First Model

Initialize Xenova and execute your first inference.

import { Xenova } from '@xenova/core'

const xenova = new Xenova({
  apiKey: process.env.XENOVA_API_KEY,
  network: 'mainnet-beta'
})

const result = await xenova.execute({
  model: 'transformer-v1',
  input: 'Your data here'
})
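Network-backed calls like the one above can fail transiently (RPC timeouts, congestion). One way to harden them, sketched here with a generic retry wrapper; the backoff parameters are illustrative defaults, not part of the Xenova SDK:

```typescript
// Generic retry with exponential backoff for flaky async calls,
// e.g. () => xenova.execute({ model, input }).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Wait 250ms, 500ms, 1000ms, ... between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Usage: `const result = await withRetry(() => xenova.execute({ model: 'transformer-v1', input: 'Your data here' }))`.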

Installation

Requirements

  • Node.js 18+ or Bun
  • Solana wallet with SOL for transactions
  • Xenova API key (available from the dashboard)
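The requirements above can be verified with a small preflight check before running anything. The env var name `XENOVA_API_KEY` matches the Quick Start example; the version check itself is generic Node.js:

```typescript
// Returns true if a Node.js version string like "v18.19.0" meets the minimum.
function checkNodeVersion(version: string, minMajor = 18): boolean {
  const major = Number(version.replace(/^v/, "").split(".")[0]);
  return Number.isInteger(major) && major >= minMajor;
}

// Example preflight (run before initializing the SDK):
// if (!checkNodeVersion(process.version)) throw new Error("Node.js 18+ required");
// if (!process.env.XENOVA_API_KEY) throw new Error("XENOVA_API_KEY is not set");
```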

Package Installation

npm install @xenova/core
# or
yarn add @xenova/core
# or
pnpm add @xenova/core

Architecture

Xenova's architecture is designed for maximum performance and scalability on Solana.

Parallel Execution

Xenova leverages Solana's Sealevel runtime to execute multiple inference requests in parallel, dramatically improving throughput.
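The Sealevel parallelism described above happens on-chain, but the same idea applies client-side: independent inference requests can be dispatched concurrently rather than one at a time. A sketch, where `infer` stands in for a call like `xenova.execute()`:

```typescript
// Dispatch independent inference requests concurrently.
// `infer` is a stand-in for the real SDK call, e.g.
// (input) => xenova.execute({ model: "transformer-v1", input }).
async function inferAll(
  inputs: string[],
  infer: (input: string) => Promise<string>
): Promise<string[]> {
  // Promise.all starts every request before awaiting any of them.
  return Promise.all(inputs.map((input) => infer(input)));
}
```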

Model Optimization

Models are automatically optimized using custom WASM compilation and quantization, reducing model size by up to 10x with negligible accuracy loss.
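To illustrate the quantization part of that claim (this is a toy example, not Xenova's actual pipeline): mapping float32 weights onto int8 alone cuts storage 4x, and combining it with other compression is how larger reductions are reached.

```typescript
// Toy symmetric int8 quantization: store one scale per tensor,
// map each weight to round(w / scale) in [-127, 127].
function quantizeInt8(weights: number[]): { q: Int8Array; scale: number } {
  const maxAbs = Math.max(...weights.map(Math.abs), 1e-12);
  const scale = maxAbs / 127;
  const q = Int8Array.from(weights, (w) => Math.round(w / scale));
  return { q, scale };
}

// Recover approximate float weights from the int8 representation.
function dequantize(q: Int8Array, scale: number): number[] {
  return Array.from(q, (v) => v * scale);
}
```

The round trip is lossy, but for well-scaled weights the per-value error stays below one quantization step.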

Caching Layer

Xenova leverages Solana's account system to cache frequently used models and intermediate results, reducing latency for repeated requests.
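The on-chain account cache is internal to Xenova, but the same pattern is easy to apply client-side: memoize results keyed by model and input so repeated requests skip the network entirely. A minimal sketch (the class and its API are illustrative, not part of the SDK):

```typescript
// Simple in-memory cache for inference results, keyed by model + input.
class InferenceCache {
  private store = new Map<string, string>();

  private key(model: string, input: string): string {
    return `${model}:${input}`;
  }

  get(model: string, input: string): string | undefined {
    return this.store.get(this.key(model, input));
  }

  set(model: string, input: string, result: string): void {
    this.store.set(this.key(model, input), result);
  }
}
```

Check the cache before calling `xenova.execute()`, and store the result on a miss.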

API Reference

Xenova.execute()

Execute an AI model inference with the provided input.

import { Xenova } from '@xenova/core'

const xenova = new Xenova({
  apiKey: process.env.XENOVA_API_KEY,
  network: 'mainnet-beta'
})

const result = await xenova.execute({
  model: 'transformer-v1',
  input: 'Your data here'
})

Pipeline.execute()

Execute a multi-stage inference pipeline with automatic optimization.

import { Xenova, Pipeline } from '@xenova/core'

const pipeline = new Pipeline([
  { stage: 'tokenize', model: 'tokenizer-v2' },
  { stage: 'transform', model: 'transformer-v1' },
  { stage: 'optimize', parallel: true }
])

const result = await pipeline.execute(data)
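Conceptually, a pipeline threads each stage's output into the next stage's input. A hedged sketch of that idea (the runner below is illustrative, not the SDK's implementation; real stages would be async model calls rather than plain functions):

```typescript
// A stage transforms its input into the next stage's input.
type Stage<T> = (input: T) => T;

// Run stages in order, feeding each output forward.
function runPipeline<T>(stages: Stage<T>[], input: T): T {
  return stages.reduce((acc, stage) => stage(acc), input);
}
```

Stages flagged `parallel: true` in the example above would fan out independent work within a stage; the sequential reduce shown here covers only the stage-to-stage flow.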

Security

Xenova is built with security as a top priority. All code is open-source and audited.

Audited Code

All smart contracts audited by top security firms

Open Source

Full transparency with public GitHub repositories

Deployment

Deploy your Xenova-powered application to production with confidence.

Production Checklist

  • Configure production RPC endpoints
  • Set up monitoring and alerts
  • Enable rate limiting and caching
  • Test with mainnet-beta tokens
  • Review security best practices
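One way to implement the rate-limiting item above on the client side is a token bucket; the capacity and refill rate here are illustrative, and the injectable clock exists only to make the sketch testable:

```typescript
// Token bucket: allow bursts up to `capacity`, refill at `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    private now: () => number = () => Date.now() // injectable clock (ms)
  ) {
    this.tokens = capacity;
    this.last = this.now();
  }

  // Returns true if a request may proceed, consuming one token.
  tryRemove(): boolean {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.refillPerSec
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Gate each `xenova.execute()` call behind `tryRemove()`, queuing or rejecting requests when it returns false.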