r/Python

Why does it take 3 hours to read my own email with Python in 2026?
Discussion

Here is what Google thinks is a reasonable developer experience:

  1. Create a Google Cloud project

  2. Enable the Gmail API (but also the People API for contacts, and the Calendar API separately)

  3. Configure an OAuth consent screen - answer 15 questions about your "app" when all you want is to read your own inbox

  4. Create OAuth credentials - but wait, which type? Web? Desktop? Service account? The docs point you in circles

  5. Download a credentials JSON file, put it somewhere your script can find it

  6. Write the OAuth flow code - handle token refresh, token expiry, token storage

  7. Run it, get a browser redirect, approve permissions on a consent screen that warns users YOUR OWN APP is "unverified"

  8. Pray the token doesn't expire while you're still debugging step 6
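For concreteness, steps 5 through 8 boil down to roughly the following. This is a hedged sketch using the official google-auth / google-auth-oauthlib packages and a "credentials.json" downloaded in step 5; the function name and file paths are placeholders:

```python
# Sketch of steps 5-8: load a cached token, refresh it if expired,
# otherwise run the browser consent flow once and cache the result.
from pathlib import Path

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

def get_gmail_credentials(token_path="token.json", secrets_path="credentials.json"):
    # Imports are deferred so the module loads even without the packages installed.
    from google.oauth2.credentials import Credentials
    from google.auth.transport.requests import Request
    from google_auth_oauthlib.flow import InstalledAppFlow

    creds = None
    if Path(token_path).exists():
        creds = Credentials.from_authorized_user_file(token_path, SCOPES)
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())                     # step 6: silent token refresh
    elif not creds or not creds.valid:
        flow = InstalledAppFlow.from_client_secrets_file(secrets_path, SCOPES)
        creds = flow.run_local_server(port=0)        # step 7: browser consent dance
    Path(token_path).write_text(creds.to_json())     # step 6: token storage
    return creds

if __name__ == "__main__" and Path("credentials.json").exists():
    print("Scopes:", get_gmail_credentials().scopes)
```

That's already the condensed version, and it still assumes you picked the right credential type ("Desktop app") back in step 4.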

And that's the HAPPY PATH. Half the tutorials are outdated, reference deprecated libraries, or skip critical steps. The official Google docs read like they were written by three different teams who never talked to each other.

I've been building software for close to 40 years. I've shipped systems in healthcare, security, enterprise - real production environments. And I genuinely struggled with this. Not because the concepts are hard, but because the process is needlessly hostile to developers.

The real kicker: I also need my users to be able to connect their Gmail accounts. So "just use a service account" or "just use an App Password" doesn't cut it.

I need actual OAuth that works for end users, which means going through Google's verification process, which is its own special circle of frustration.

All I want is a Python library where I can write something like:

gmail = connect_to_gmail()

emails = gmail.inbox(last=50)

contacts = gmail.contacts()

calendar = gmail.events(next_week=True)

Instead I'm drowning in credential files, token pickle storage, scope strings, and consent screen configurations.
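To be fair, the wished-for API is mostly a thin wrapper away once the OAuth pain is behind you. A hypothetical sketch over the official google-api-python-client (the class and method names here are invented; the underlying service calls - gmail v1, people v1, calendar v3 - are real):

```python
# Hypothetical wrapper exposing roughly the API wished for above.
import datetime

class GoogleAccount:
    def __init__(self, creds):
        # Deferred import so the module loads without the package installed.
        from googleapiclient.discovery import build
        self._gmail = build("gmail", "v1", credentials=creds)
        self._people = build("people", "v1", credentials=creds)
        self._calendar = build("calendar", "v3", credentials=creds)

    def inbox(self, last=50):
        resp = self._gmail.users().messages().list(
            userId="me", labelIds=["INBOX"], maxResults=last).execute()
        return resp.get("messages", [])

    def contacts(self):
        resp = self._people.people().connections().list(
            resourceName="people/me",
            personFields="names,emailAddresses").execute()
        return resp.get("connections", [])

    def events(self, next_week=False):
        now = datetime.datetime.now(datetime.timezone.utc)
        horizon = now + datetime.timedelta(days=7 if next_week else 1)
        resp = self._calendar.events().list(
            calendarId="primary",
            timeMin=now.isoformat(),
            timeMax=horizon.isoformat()).execute()
        return resp.get("items", [])
```

But the wrapper is the easy 10%; the credential plumbing and the verification process are the 90% nobody has packaged well.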

Has anyone found a genuinely good solution for this? A library or service that wraps the OAuth pain and just lets you work with Gmail/Contacts/Calendar data?

Something your users can set up without a 3-hour onboarding session?

I keep thinking someone must have solved this by now. Because right now, Google has effectively made it harder to programmatically access your own email than it was 20 years ago with IMAP.
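And the IMAP comparison isn't rhetorical: for your own account (though not for end users), the 20-year-old flow still works today with a Gmail App Password and nothing but the standard library. A sketch, with placeholder credentials:

```python
# Old-school Gmail access via IMAP: standard library only, no OAuth.
# "imap.gmail.com" and "INBOX" are Gmail's documented values.
import email
import imaplib

def fetch_latest_subjects(user, app_password, n=5):
    with imaplib.IMAP4_SSL("imap.gmail.com") as imap:
        imap.login(user, app_password)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "ALL")
        ids = data[0].split()[-n:]              # newest messages come last
        subjects = []
        for msg_id in reversed(ids):
            _, parts = imap.fetch(msg_id, "(RFC822.HEADER)")
            msg = email.message_from_bytes(parts[0][1])
            subjects.append(msg["Subject"])
        return subjects
```

Two login lines versus eight setup steps. That's the regression in a nutshell.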


(Rant) AI is killing programming and the Python community
Meta

I'm sorry but it has to come out.

We are experiencing an endless sleep paralysis and it is getting worse and worse.

Before, when we wanted to code in Python, it was simple: either you read the documentation and available resources, or you asked the community for help; that was roughly it.

The advantage was that stupidly copying/pasting code often led to errors, so you had to take the time to understand, review, modify and test your program.

Since the arrival of ChatGPT-type AI, programming has taken a completely different turn.

We see new coders appear with a few months of Python experience who hand us projects of 2,000 lines of code with no version control (no rigor in developing or maintaining the code), generic comments that smell of AI from miles away, an equally generic .md with that same AI-flavored structure, and above all a program that is not understood by its own developer.

I have been coding in Python for 8 years, I am 100% self-taught and yet I am stunned by the deplorable quality of some AI-doped projects.

In fact, we are witnessing a massive wave of new projects that look super cool on the surface and turn out to be absolutely worthless: the developer doesn't even master the subject his program deals with, he understands maybe 30% of his own code, the code is not optimized at all, and there are more "import" lines than algorithms actually thought through for the project.

I see it personally in data science done in Python, where devs design a project that seems interesting at first glance, but when you analyze the repository you discover it is strongly inspired by another project which, by the way, was itself inspired by yet another project. I mean, being inspired is fine, but here we are closer to cloning than to creating a project with real added value.

So in 2026 we end up with posts from people presenting a super innovative, technical project that even a senior dev would struggle to build alone, and on closer inspection it rings hollow: the performance is chaotic, security has become optional on some projects, and the program has zero optimization while using multithreading without knowing what it is or why. At this point, reverse engineering won't even need specialized software, the errors will be that glaring. I'm not even talking about SQL query optimization, which makes you dizzy.

Finally, as you will have understood, I am disgusted by this (hopefully) minority of devs who are propped up by AI.

AI is good, but you have to know how to use it intelligently, with perspective and a critical mind; some treat it like a senior Python dev.

Subreddits like this are essential, and I hope that devs will continue to take the time to inquire by exploring community posts instead of systematically choosing ease and giving blind trust to an AI chat.


A pure Python HTTP Library built on free-threaded Python
Showcase

Barq is a lightweight HTTP framework (~500 lines) that uses free-threaded Python (PEP 703) to achieve true parallelism with threads instead of async/await or multiprocessing. It's written entirely in pure Python: no C extensions, no Rust, no Cython, just the standard library plus Pydantic.

from barq import Barq

app = Barq()

@app.get("/")
def index():
    return {"message": "Hello, World!"}

app.run(workers=4)  # 4 threads, not processes

Benchmarks (Barq 4 threads vs FastAPI 4 worker processes):

| Scenario | Barq (4 threads) | FastAPI (4 processes) | Delta |
|---|---|---|---|
| JSON | 10,114 req/s | 5,665 req/s | +79% |
| DB query | 9,962 req/s | 1,015 req/s | +881% |
| CPU bound | 879 req/s | 1,231 req/s | -29% |

Target Audience

This is an experimental/educational project to explore free-threaded Python capabilities. It is not production-ready. Intended for developers curious about PEP 703 and what a post-GIL Python ecosystem might look like.

Comparison

| Feature | Barq | FastAPI | Flask |
|---|---|---|---|
| Parallelism | Threads (free-threaded) | Processes (uvicorn workers) | Processes (gunicorn) |
| Async required | No | Yes (for perf) | No |
| Pure Python | Yes | No (uvloop, etc.) | No (Werkzeug) |
| Shared memory | Yes (threads) | No (IPC needed) | No (IPC needed) |
| Production ready | No | Yes | Yes |

The main difference: Barq leverages Python 3.13's experimental free-threading mode to run synchronous code in parallel threads with shared memory, while FastAPI/Flask rely on multiprocessing for parallelism.
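You can verify the premise independently of Barq. This sketch checks whether the interpreter is running GIL-free (via `sys._is_gil_enabled`, added in 3.13; guarded so it runs on older versions too) and fans CPU-bound pure-Python work across threads, which only scales on a `python3.13t` build:

```python
# Minimal check of free-threaded parallelism: on a GIL build the flag
# reports True and the threads below serialize; on python3.13t they
# genuinely run in parallel.
import sys
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    # Pure-Python CPU-bound loop; no C extension releases the GIL here.
    total = 0
    for i in range(n):
        total += i * i
    return total

gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print("GIL enabled:", gil_enabled)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(busy, [200_000] * 4))

print("All workers agree:", len(set(results)) == 1)
```

Timing the pool on a GIL build versus `python3.13t` reproduces the kind of gap the benchmark table shows for the non-CPU-bound rows.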

Source code: https://github.com/grandimam/barq

Requirements: Python 3.13+ with free-threading enabled (python3.13t)