Shubham Saboo

@Saboo_Shubham_

10 Tweets · May 02, 2024
Build an LLM app with RAG to chat with PDFs in just 30 lines of Python code
(step-by-step instructions):
1. Import necessary libraries
• Streamlit for building the web app
• Embedchain for the RAG functionality
• tempfile for creating temporary files and directories
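The original code was shared as an image, so here is a sketch of what the imports might look like (assuming the packages are installed with 'pip install streamlit embedchain'):

```python
# Required packages (install first): pip install streamlit embedchain
import tempfile  # temporary files/dirs for the uploaded PDF and vector DB

import streamlit as st       # web UI
from embedchain import App   # RAG pipeline (LLM + embeddings + vector DB)
```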
2. Configure the Embedchain App
Select OpenAI as the LLM and embedding provider; you can also choose Cohere, Anthropic, or any other provider of your choice.
Select the open-source Chroma DB as the vector database (you are free to choose any other vector database of your choice).
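A sketch of this configuration step, assuming Embedchain's 'App.from_config()' API (the exact config keys can differ between Embedchain versions; the helper name 'embedchain_bot' is the one referenced later in step 4):

```python
from embedchain import App


def embedchain_bot(db_path: str, api_key: str) -> App:
    # OpenAI for both the LLM and embeddings, Chroma as the local vector DB.
    # Swap the "provider" values to use cohere, anthropic, or another backend.
    return App.from_config(config={
        "llm": {"provider": "openai", "config": {"api_key": api_key}},
        "embedder": {"provider": "openai", "config": {"api_key": api_key}},
        "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
    })
```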
3. Set up the Streamlit App
Streamlit lets you create a user interface with just Python code. For this app we will:
• Add a title to the app using 'st.title()'
• Create a text input box for the user to enter their OpenAI API key using 'st.text_input()'
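The UI setup for these two bullets might look like this (the title text is my own placeholder):

```python
import streamlit as st

st.title("Chat with PDF using Embedchain 📄")

# type="password" keeps the API key masked in the browser
openai_access_token = st.text_input("OpenAI API Key", type="password")
```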
4. Initialize the Embedchain App
• If the OpenAI API key is provided, create a temporary directory for the vector database using 'tempfile.mkdtemp()'
• Initialize the Embedchain app using the 'embedchain_bot' function
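A sketch of the initialization step, assuming the 'embedchain_bot' helper from step 2:

```python
import tempfile

if openai_access_token:
    # Throwaway directory that Chroma uses to persist its vectors
    db_path = tempfile.mkdtemp()
    app = embedchain_bot(db_path, openai_access_token)
```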
5. Upload a PDF file from UI and add it to the knowledge base
• Use 'st.file_uploader()' to create a file uploader for PDF files.
• If a PDF file is uploaded, create a temporary file and write the contents of the uploaded file to it.
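This step might be sketched as follows (Embedchain's 'add()' expects a path on disk, hence the temporary file; the 'data_type' value may vary by Embedchain version):

```python
pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
if pdf_file:
    # Spill the in-memory upload to a temp file so Embedchain can read it
    with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
        f.write(pdf_file.getvalue())
        app.add(f.name, data_type="pdf_file")
    st.success(f"Added {pdf_file.name} to knowledge base!")
```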
6. Ask a question about the PDF and display the answer
• Create a text input for the user to enter their question using 'st.text_input()'
• If a question is asked, get the answer from the Embedchain app and display it using 'st.write()'
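A minimal sketch of the question-answering step ('chat()' is one Embedchain entry point; 'query()' would also work here):

```python
prompt = st.text_input("Ask a question about the PDF")
if prompt:
    # Retrieval over the indexed PDF + generation with the LLM, in one call
    st.write(app.chat(prompt))
```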
Full RAG Application Code to Chat with PDF👇
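The full code in the thread was shared as an image; below is a hedged reconstruction assembled from the steps above (config keys, 'data_type' values, and the 'chat()' call may differ across Embedchain versions):

```python
# chat_pdf.py — chat with a PDF via RAG (sketch, ~30 lines)
import tempfile

import streamlit as st
from embedchain import App


def embedchain_bot(db_path: str, api_key: str) -> App:
    # OpenAI for LLM + embeddings, Chroma as the local vector store
    return App.from_config(config={
        "llm": {"provider": "openai", "config": {"api_key": api_key}},
        "embedder": {"provider": "openai", "config": {"api_key": api_key}},
        "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
    })


st.title("Chat with PDF using Embedchain 📄")

openai_access_token = st.text_input("OpenAI API Key", type="password")

if openai_access_token:
    db_path = tempfile.mkdtemp()  # temporary dir for the Chroma DB
    app = embedchain_bot(db_path, openai_access_token)

    pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
    if pdf_file:
        with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
            f.write(pdf_file.getvalue())
            app.add(f.name, data_type="pdf_file")
        st.success(f"Added {pdf_file.name} to knowledge base!")

    prompt = st.text_input("Ask a question about the PDF")
    if prompt:
        st.write(app.chat(prompt))
```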
Working Application demo using Streamlit
Paste the above code into a file named 'chat_pdf.py' in VS Code and run the following command: 'streamlit run chat_pdf.py'
If you find this useful, RT to share it with your friends.
Don't forget to follow me @Saboo_Shubham_ for more such LLM tips and tutorials.
