OCI Generative AI On-Demand Models – From Setup to Chat App
Generative AI is transforming how organizations build intelligent applications, from interactive assistants to automated knowledge systems. Oracle Cloud Infrastructure (OCI) makes this power accessible through its Generative AI On-Demand Models, including options like Cohere’s Command R+ and Meta’s Llama 3.3.
On-Demand Models are economical: you pay only for the requests you make, with no dedicated AI infrastructure to provision or manage.
In this guide, we’ll take you through the complete journey — starting with configuring access and locating the right model OCID, and ending with a fully functional chat application built using the OCI Python SDK and Streamlit. By the end, you’ll know exactly how to move from setup to implementation and bring Generative AI into your own applications.
Getting Started
Before writing any code, you must configure OCI credentials to allow your application to call Generative AI services.
1. Generate an API Key
- Log in to the OCI Console.
- Click your profile icon → User Settings.
- Under Resources, select API Keys.
- Click Add API Key and either:
  - Generate a new key pair in OCI (download the private key .pem), or
  - Upload your own public key (if you already created one with openssl).
- After adding the key, OCI will show you:
  - User OCID
  - Tenancy OCID
  - Fingerprint
  - Region
- Copy these values. They go into your OCI configuration file, as shown below.
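The console also displays a configuration file preview built from these values. Your local ~/.oci/config file, which the OCI SDKs read by default, should look similar. Below is a minimal sketch with placeholder values, assuming the DEFAULT profile and the key path used in the next step; the region shown is only an example, so use one where the Generative AI service is available to you:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_key_fingerprint>
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-chicago-1
key_file=~/.oci/oci_api_key.pem
```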
2. Save the Private Key
If OCI generated the key, download the .pem file and save it as ~/.oci/oci_api_key.pem.
Restrict access so that only your user account can read it:
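On macOS or Linux, a standard way to do this is:

```bash
# Allow only the owning user to read/write the private key
chmod 600 ~/.oci/oci_api_key.pem
```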

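Before moving on, it's worth confirming that the SDK can actually read these credentials. Here is a minimal sketch using the OCI Python SDK's config helpers (install the SDK with `pip install oci` if you haven't already):

```python
import oci

# Load the DEFAULT profile from ~/.oci/config
config = oci.config.from_file()

# Raises an exception if a required entry (user, tenancy, fingerprint,
# key_file, region) is missing or malformed
oci.config.validate_config(config)

print("Config OK, authenticating as:", config["user"])
```

If this runs without errors, the same config object can later be passed to the Generative AI inference client when we build the chat application.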