# Margdarshak

## Steps to Run
To run the project, use the following commands:
```shell
# Run the project
API_KEY=<openai api key> cargo run
```
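The server expects the OpenAI key in the `API_KEY` environment variable. A minimal sketch of that lookup (the `api_key` helper name is hypothetical; the project reads the variable during startup):

```rust
use std::env;

// Hypothetical helper: fetch the OpenAI key from the environment.
// Returns None when the variable is unset, so the caller can fail
// with a clear message instead of panicking deep in the wizard.
fn api_key() -> Option<String> {
    env::var("API_KEY").ok()
}

fn main() {
    match api_key() {
        Some(_) => println!("API_KEY is set"),
        None => eprintln!("API_KEY is missing; the wizard cannot authenticate"),
    }
}
```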
The project uses the following dependencies:

- `anyhow`: For error handling.
- `async-trait`: For async functions in traits.
- `bytes`: For working with byte buffers.
- `http-body-util`: For working with HTTP bodies.
- `hyper`: For HTTP server and client implementation.
- `tokio`: For asynchronous runtime.
- `tracing`: For application-level tracing and logging.
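Based on the list above, the `[dependencies]` section of `Cargo.toml` likely looks roughly like this; the version numbers and feature flags are illustrative guesses, not taken from the project:

```toml
[dependencies]
anyhow = "1"
async-trait = "0.1"
bytes = "1"
http-body-util = "0.1"
hyper = { version = "1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tracing = "0.1"
```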
## Project Logic
The project handles HTTP requests and answers them using a wizard model. A brief overview of the logic:

1. **Initialization**: The `run` function initializes the `TargetRuntime` with HTTP, file system, and environment IO components. It then scrapes data from a specified URL and processes it into a query.
2. **Wizard Interaction**: A `Wizard` instance is created from the processed query and an API key read from the environment. The wizard is used to ask questions and get responses.
3. **HTTP Server**: The `AppCtx` struct, which holds the `Wizard` instance and the scraped data, is passed to the HTTP server, which listens for incoming requests.
4. **Request Handling**: When a request is received, the `handle_request` function processes the request body, creates a `Question` instance, and uses the wizard to get a response, which is then sent back to the client.
This setup allows the project to dynamically process and respond to HTTP requests using the wizard model.
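The request-handling flow can be sketched as follows. This is a simplified, synchronous stand-in: `EchoWizard` is a stub in place of the real model-backed `Wizard`, and hyper's async server plumbing is omitted; only the names `AppCtx`, `Question`, and `handle_request` are taken from the description above.

```rust
// Stand-in for the model-backed wizard; the real one calls the OpenAI API.
trait Wizard {
    fn ask(&self, question: &Question) -> String;
}

// A question built from an incoming request body.
struct Question {
    text: String,
}

// Stub wizard that echoes the question back, for illustration only.
struct EchoWizard;

impl Wizard for EchoWizard {
    fn ask(&self, question: &Question) -> String {
        format!("answer to: {}", question.text)
    }
}

// Shared application context handed to the HTTP server.
struct AppCtx<W: Wizard> {
    wizard: W,
}

// Simplified handler: body -> Question -> wizard -> response body.
fn handle_request<W: Wizard>(ctx: &AppCtx<W>, body: &str) -> String {
    let question = Question { text: body.trim().to_string() };
    ctx.wizard.ask(&question)
}

fn main() {
    let ctx = AppCtx { wizard: EchoWizard };
    println!("{}", handle_request(&ctx, "what is caching?"));
}
```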