1. Introduction
1. About the LlamaIndex course
2. Basic Setup
pip install llama-index openai
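The notebooks later in the course also read the OpenAI key from a .env file via python-dotenv, so it is worth setting that up as part of the basic setup. A minimal sketch, assuming the key is stored under the standard OPENAI_API_KEY variable name:
pip install python-dotenv
# .env (in the project root; keep it out of version control)
OPENAI_API_KEY=your-openai-api-key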
3. Help & resources
2. Introduction to LlamaIndex and LLM applications
1. Intro to section and LLM applications
2. Train ChatGPT (LLMs) on custom data - RAG
3. The difference between LlamaIndex and LangChain
4. LLMs and data privacy
6. How LlamaIndex works
# %% Logging: mirror llama_index log output to stdout so the index/query steps are visible
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
# %% Cap the number of threads numexpr may use (must be set before numexpr is imported)
import os
os.environ['NUMEXPR_MAX_THREADS'] = '4'
os.environ['NUMEXPR_NUM_THREADS'] = '2'
import numexpr as ne
# %% pip install nltk python-dotenv
from dotenv import load_dotenv
load_dotenv()  # reads OPENAI_API_KEY from the .env file in the project root
import openai
# openai.api_key = ''  # alternative: set the key directly instead of using .env
# %% Build the index: load the documents, then chunk, embed, and store them as vectors
from llama_index import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader('assets/AndrewHuberman/sleep').load_data()
index = VectorStoreIndex.from_documents(documents)
# %% Query: retrieve the most relevant chunks and let the LLM answer from them
query_engine = index.as_query_engine()
response = query_engine.query('What can I do to sleep better?')
print(response)
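Under the hood, the query engine first retrieves the chunks most similar to the question and then has the LLM synthesize an answer from them. A quick way to see that retrieval step is to inspect the sources attached to the response; a minimal sketch, reusing the query_engine built above:
# %% Inspect which chunks the answer was synthesized from
response = query_engine.query('What can I do to sleep better?')
print(len(response.source_nodes))        # number of retrieved chunks
print(response.get_formatted_sources())  # chunk excerpts with similarity scores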
7. How to use LLMs with LlamaIndex
# %% Same environment setup as before: load the OpenAI key and cap numexpr threads
from dotenv import load_dotenv
load_dotenv()
import os
os.environ['NUMEXPR_MAX_THREADS'] = '4'
os.environ['NUMEXPR_NUM_THREADS'] = '2'
import numexpr as ne
# %% Completion interface: a single prompt in, a single completion out
from llama_index.llms import OpenAI
llm = OpenAI(temperature=0, model='gpt-4', max_tokens=250)  # temperature=0 for deterministic output
response = llm.complete('What is AI?')
print(response)
# print(response.raw)  # the raw API payload behind the response
# %% Chat interface: a system message sets the persona, a user message asks the question
from llama_index.llms import OpenAI, ChatMessage
llm = OpenAI(temperature=0, model='gpt-4', max_tokens=250)
messages = [
    ChatMessage(role='system', content='Talk like a hippie'),
    ChatMessage(role='user', content='Tell me about AI'),
]
response = llm.chat(messages)
print(response)
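To use this configured LLM for the question answering in the RAG pipeline from the earlier lesson (instead of the library default), it can be passed in through a ServiceContext. A minimal sketch, assuming the same pre-0.10 llama_index version and document folder used above:
# %% Plug the configured LLM into the index (sketch, pre-0.10 llama_index API)
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import OpenAI
llm = OpenAI(temperature=0, model='gpt-4', max_tokens=250)
service_context = ServiceContext.from_defaults(llm=llm)  # bundles the LLM with default settings
documents = SimpleDirectoryReader('assets/AndrewHuberman/sleep').load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
print(index.as_query_engine().query('What can I do to sleep better?'))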
8. Comparing LLM models
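A straightforward way to compare models is to run the same prompt through each candidate with the OpenAI wrapper shown above and compare the answers (and, in practice, latency and token cost as well). A minimal sketch; the prompt and model list are only examples:
# %% Run one prompt through several models and compare the outputs
from llama_index.llms import OpenAI
prompt = 'Explain retrieval-augmented generation in one sentence.'
for model in ['gpt-3.5-turbo', 'gpt-4']:
    llm = OpenAI(temperature=0, model=model, max_tokens=250)
    print(f'--- {model} ---')
    print(llm.complete(prompt))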
3. Diving deeper into LlamaIndex
1. Building blocks of LlamaIndex
…