Inspiration

Open your ~/Downloads directory. Or your Desktop. It's probably a mess...

There are only two hard things in Computer Science: cache invalidation and naming things.

What it does

LlamaFS is a self-organizing file manager. It uses Llama 3 to scan the contents of your files and derive names and directory structures that are semantically meaningful and easy for humans to navigate — in short, it automatically renames and organizes your files based on their contents and well-known conventions.

LlamaFS runs in two "modes" - as a batch job (batch mode), and an interactive daemon (watch mode).

In batch mode, you point LlamaFS at a directory and let it do its work. We then present the suggested changes, letting you individually accept or reject them for each file.
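Batch mode can be sketched as a simple two-step loop: propose new names for every file, then apply only the ones the user accepted. This is an illustrative sketch, not the actual LlamaFS code; `suggest_name` stands in for the LLM call and is a hypothetical parameter here.

```python
import os
from pathlib import Path


def propose_renames(root: str, suggest_name) -> dict:
    """Walk `root` and build a mapping of old path -> suggested filename.

    `suggest_name` is a callable (an LLM call in practice; stubbed here)
    that takes a text snippet and returns a meaningful filename.
    """
    proposals = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            # Only a snippet is needed to summarize the file's contents.
            text = path.read_text(errors="ignore")[:2000]
        except OSError:
            continue
        proposals[str(path)] = suggest_name(text)
    return proposals


def apply_accepted(proposals: dict, accepted: set) -> None:
    """Rename only the files the user explicitly accepted."""
    for old, new in proposals.items():
        if old in accepted:
            os.rename(old, str(Path(old).with_name(new)))
```

The accept/reject step is what keeps batch mode safe: nothing is renamed until the user confirms each proposal.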

In watch mode, LlamaFS starts a daemon that watches your directory. It intercepts all filesystem operations, updates its index, and uses your most recent edits as context to proactively learn how you rename and organize files. For example, if you create a folder for 2023 tax documents and start moving a few files into it, LlamaFS will automatically move the right files there for you!
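The learning loop behind that tax-folder example can be illustrated with a tiny predictor: record which folder files with a given keyword get moved into, and once enough similar files land in the same place, predict that folder for the next one. This is a hedged sketch in plain Python (no real filesystem hooks; the class and threshold are hypothetical, not LlamaFS internals).

```python
from collections import defaultdict
from typing import Optional


class MovePredictor:
    """Learns from recent manual moves: if several files sharing a keyword
    were moved into the same folder, predict that folder for similar files."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        # keyword -> {destination folder -> count of observed moves}
        self.history = defaultdict(lambda: defaultdict(int))

    @staticmethod
    def _words(filename: str):
        return filename.lower().replace("-", " ").replace("_", " ").split()

    def observe(self, filename: str, dest_folder: str) -> None:
        """Record one manual move the user made."""
        for word in self._words(filename):
            self.history[word][dest_folder] += 1

    def predict(self, filename: str) -> Optional[str]:
        """Suggest a destination folder, or None if no pattern is strong yet."""
        best, best_count = None, 0
        for word in self._words(filename):
            for folder, count in self.history[word].items():
                if count >= self.threshold and count > best_count:
                    best, best_count = folder, count
        return best
```

A real daemon would feed `observe` from intercepted filesystem events and use the LLM (rather than keyword counts) to judge similarity, but the observe-then-predict shape is the same.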

Uhh... Sending all my personal files to an API?! No thank you!

No worries, we got you - LlamaFS has an "incognito" toggle, giving you the freedom to route all your requests through Ollama (for privacy) or Groq (for speed). Since both run the same Llama 3 model, the suggestions are the same quality either way; only the inference speed differs.

LlamaFS also supports image files via Moondream2 (which, again, runs locally through Ollama), to handle all those "Screenshot 2024-03-06 at 7.19.37 PM" files for you.
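For images, the idea is to ask the vision model to describe the picture, then turn that description into a filename. A hedged sketch of building such a request for Ollama's generate API (the prompt wording and helper name are ours; the actual POST to the local server is omitted):

```python
import base64


def moondream_payload(image_path: str) -> dict:
    """Build an Ollama /api/generate request body asking Moondream2 to
    describe an image, so a screenshot can be renamed meaningfully."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    return {
        "model": "moondream",  # Moondream2, pulled locally via Ollama
        "prompt": "Describe this image in a short, filename-friendly phrase.",
        "images": [image_b64],  # Ollama accepts base64-encoded images
        "stream": False,
    }
```

The model's one-line description (e.g. a chart, an error dialog, a receipt) then feeds the same renaming pipeline as text files.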

How we built it

  • Llama
  • Groq and Ollama for inference
  • LlamaIndex for document loading
  • Wandb weave for debugging/monitoring
  • Custom map-reduce chain to generate summaries for each file in parallel (with a chunk-level cache to handle small file diffs)
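The map-reduce chain with a chunk-level cache can be sketched as follows: hash each chunk, only call the LLM for hashes we haven't seen, and join the partial summaries. This is an illustrative sketch (the function names and chunking scheme are assumptions, and `llm_summarize` stands in for the model call):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

_cache = {}  # chunk content hash -> cached summary


def summarize_chunk(chunk: str, llm_summarize) -> str:
    """Map step: summarize one chunk, reusing the cache when this exact
    content was seen before. Small edits to a file only re-run the
    changed chunks."""
    key = hashlib.sha256(chunk.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = llm_summarize(chunk)
    return _cache[key]


def summarize_file(text: str, llm_summarize, chunk_size: int = 1000) -> str:
    """Map chunks to summaries in parallel, then reduce by joining them."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda c: summarize_chunk(c, llm_summarize), chunks))
    return " ".join(parts)
```

Keying the cache on content hashes rather than file paths is what makes small diffs cheap: an unchanged chunk hits the cache no matter where it lives.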

We built LlamaFS on a Python backend, leveraging the Llama3 model through Groq for file content summarization and tree structuring. For local processing, we integrated Ollama running the same model to ensure privacy in incognito mode. The frontend is crafted with Electron, providing a sleek, user-friendly interface that allows users to interact with the suggested file structures before finalizing changes.

Accomplishments that we're proud of

  • It's extremely fast (by LLM standards)! Most file ops are processed in under 500 ms in watch mode. This comes from our smart caching, which selectively rewrites sections of the index based on the minimum necessary filesystem diff, and of course from Groq's super fast inference API. 😉

  • It's immediately useful. It solves a real problem almost everyone who uses a computer has. It's very low friction to use, and doesn't require any prompting or knowledge of LLMs/AI. In fact, the minute we had an MVP, we started using it to auto-name our Jupyter notebooks for this project (very meta)!

What's next for LlamaFS

  • Find and remove old/unused files
  • We have some really cool ideas for displaying directory diffs in the UI, but we decided to focus on the algorithm for this hackathon.
  • Add support for more file extensions

Built With

  • groq
  • llama-3
  • llamaindex
  • moondream
  • ollama