Hey guys, I spent the last day or two debugging an issue in OpenWebUI that led to a "waterfall" repetition of content when connecting my own agent (which should be OpenAI-compatible) as an OpenAI model. I verified that my SSE stream is correct and wanted to trace the problem in OpenWebUI's repo, but I got lost in the backend middleware.
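For reference, the kind of check I mean is roughly this (a sketch with a placeholder endpoint, model name, and API key, not my exact script): it dumps the `delta` content of each SSE chunk so you can see whether any chunk carries cumulative text instead of only the new tokens.

```python
# Minimal sketch for dumping raw SSE chunks from an OpenAI-compatible
# /chat/completions endpoint. BASE_URL, MODEL, and API_KEY are placeholders.
import json
import requests

BASE_URL = "http://localhost:8000/v1"   # placeholder: your agent's endpoint
API_KEY = "sk-placeholder"              # placeholder
MODEL = "my-agent"                      # placeholder

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "stream": True,
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    stream=True,
)

for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each chunk's delta should contain only the *new* tokens; if deltas
    # contain accumulated text, the client will render repeated content.
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
```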
I found the code quite hard to follow: lots of inner functions and factories, limited modularisation, and little docstring or in-code documentation. A rough static analysis (see the counting sketch below the key metrics) also shows a significant gap between the size of the implementation and the existing test suite.
Key Metrics
- Total Backend Lines of Code (Python): ~73,652
- Total Test Lines of Code: ~1,747
- Estimated Test-to-Code Ratio: ~2.4%
- Total Test Files Found: 7
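These are rough raw-line counts; something along these lines is enough to reproduce the general picture (a sketch: it counts all lines of `.py` files, including blanks and comments, and the test lines are included in the backend total, so exact figures will differ).

```python
# Rough line-count sketch, run from the repo root. Counts raw lines of .py
# files (blanks and comments included), so the result is only approximate.
from pathlib import Path

def count_lines(root: Path, pattern: str = "*.py") -> int:
    return sum(
        len(p.read_text(encoding="utf-8", errors="ignore").splitlines())
        for p in root.rglob(pattern)
    )

backend = count_lines(Path("backend/open_webui"))
tests = count_lines(Path("backend/open_webui/test"))
print(f"backend: {backend}, tests: {tests}, ratio: {tests / backend:.1%}")
```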
Many critical modules have no dedicated unit or integration tests in `backend/open_webui/test`; a sketch of what a minimal smoke test could look like follows the table:
| Component | LOC | Status |
|---|---|---|
| `open_webui/main.py` | 2,426 | ❌ No Tests |
| `open_webui/config.py` | 4,024 | ❌ No Tests |
| `open_webui/utils/middleware.py` | 3,758 | ❌ No Tests |
| `open_webui/retrieval/` | ~6,500+ | ❌ No Tests |
| `open_webui/routers/` | — | ❌ No Tests (21 of 25 files) |
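To make the offer of help concrete: even a thin smoke test per router would be a start. A minimal sketch using FastAPI's `TestClient` is below; the import path of the app object and the `/health` route are assumptions that would need to match the actual repo layout and fixtures.

```python
# Hedged sketch of a router smoke test using FastAPI's TestClient.
# Assumes the app object lives at open_webui.main:app and that a cheap
# endpoint such as /health exists; adjust both to the real layout.
import pytest
from fastapi.testclient import TestClient


@pytest.fixture()
def client():
    from open_webui.main import app  # assumption: app object lives here
    return TestClient(app)


def test_app_starts_and_responds(client):
    # Hypothetical endpoint; swap in whatever lightweight route the app exposes.
    response = client.get("/health")
    assert response.status_code == 200
```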
Only a few specific areas have existing tests:
- Auths: `test_auths.py`
- Users: `test_users.py`
- Models: `test_models.py`
- Prompts: `test_prompts.py`
- Storage: `test_provider.py`
- Redis Utility: `test_redis.py`
Any advice on how to approach debugging this?
Is there another testing strategy you use that I am not aware of to keep the solution robust?
If not, do you think adding more unit tests and some targeted refactoring of these critical components could be on the roadmap? Happy to help as well.