## Introduction

This project is a lightweight Streamlit application that wraps a LangChain-powered chat interface. Users can converse with an OpenAI model in the browser, while conversation history is preserved with Streamlit session state. Responses are streamed token by token for a responsive feel.
## Features

- Streaming chat responses using `langchain-openai` and `StrOutputParser` for incremental output.
- Persistent conversation history stored in `st.session_state` so earlier turns stay visible during a session.
- Chat-style UI built with `st.chat_message` and `st.chat_input` components.
- Environment-driven configuration that loads secrets (like `OPENAI_API_KEY`) from a local `.env` file via `python-dotenv`.
## Project structure

```
.
├── main.py           # CLI entry that prints a greeting (not used by Streamlit)
├── src/
│   └── main.py       # Streamlit app with chat logic
├── pyproject.toml    # Project metadata and dependencies
└── uv.lock           # Resolved dependency lockfile for uv/pip
```

Or see the full source code on GitHub.
## Requirements

- Python 3.9+
- An OpenAI API key with access to chat models
- The dependencies listed in `pyproject.toml` (Streamlit, LangChain, python-dotenv, etc.)
## Setup

- Install dependencies (use your preferred installer):

  ```shell
  # Using pip
  pip install -e .

  # Or using uv
  uv pip install -e .
  ```

- Create a `.env` file in the project root with your API key:

  ```shell
  echo "OPENAI_API_KEY=sk-..." > .env
  ```

- Verify environment variables are loaded (the app calls `load_dotenv()` at startup). Your key must be available as `OPENAI_API_KEY` before the first chat request.
## Running the app

Start the Streamlit UI pointed at the application module:

```shell
streamlit run src/main.py
```

The app sets a page title, icon, and wide layout. Once running, open the provided local URL in your browser to begin chatting.
## How it works

- On launch, `src/main.py` loads environment variables and initializes `st.session_state["chat_history"]` to hold a list of LangChain `HumanMessage` and `AIMessage` objects.
- The helper `get_chat_response` builds a simple prompt template that includes prior turns and the latest user input, then streams the model response via `ChatOpenAI` configured with your API key.
- Previous messages are rendered with `st.chat_message`, preserving user/assistant roles. User input is collected with `st.chat_input`, appended to history, and the streamed assistant response is written to the page and stored.
## Tips for use

- Keep your `.env` file out of version control to avoid leaking keys.
- Because responses stream, network interruptions may leave partial output; resubmit if needed.
- To reset the conversation, clear `chat_history` in the sidebar (if using Streamlit's session state inspector) or restart the app.
## Development notes

- `main.py` at the repository root is only a placeholder CLI that prints a greeting; the Streamlit experience lives in `src/main.py`.
- No automated tests are included. When contributing features, consider adding unit or integration tests around prompt construction and session behavior.
- Follow Streamlit's guidance for secrets management in production deployments (e.g., environment variables or Streamlit Community Cloud secrets).
