Inspiration

In the world of marketing, crafting compelling copy is both art and labor. We noticed that many startups and small businesses struggle to keep up with content demands – producing taglines, product descriptions, and social posts can feel like a full-time job without a dedicated creative team. At the same time, the Google Cloud ADK Hackathon’s focus on multi-agent collaboration for content creation sparked our imagination. As the hackathon prompt noted, “the real magic happens when [AI agents] collaborate to tackle complex tasks… think … content creation”. We were inspired by examples like a Social Media Branding Agent that turned a simple prompt into an entire cross-platform post (with caption, image, video) in one click. What if we could apply a similar multi-agent approach to general product marketing? Gemini Marketing Taskforce was born from this vision – an AI trio working together like a mini marketing team, ready to brainstorm and generate copy for any product.

What it does

Gemini Marketing Taskforce is an autonomous trio of AI agents – a Strategist, a Copywriter, and a Reviewer – that collaboratively generate polished marketing content for a given product. The user starts by providing basic information about a product (e.g. product name, key features, target audience). From there, the Strategist Agent takes the lead: it analyzes the input and formulates a brief marketing strategy, identifying the product’s unique selling points, the desired tone, and key messages. Next, the Copywriter Agent uses that strategy to draft the marketing copy. This can include a catchy tagline, a concise product description, and a short promotional blurb or social media post tailored to the target audience. Finally, the Reviewer Agent evaluates and refines the copy. It checks for clarity, tone consistency, and impact – much like an editor, it ensures the content aligns with the strategy and is error-free and engaging. The end result is a set of ready-to-use marketing copy pieces, all generated in one cohesive workflow.

Essentially, our project simulates a marketing team’s collaboration: the strategist plans, the copywriter creates, and the reviewer polishes. By having specialized AI agents handle each role, the system can automate and enhance the marketing content creation lifecycle in a way that’s both efficient and high-quality. For example, given a prompt about a new eco-friendly water bottle, the taskforce might output a tagline like “Hydration Meets Sustainability,” a product description highlighting its insulating design and recyclable materials, and a peppy social post that resonates with eco-conscious consumers – all consistent in voice and strategy.

How we built it

We built the project using Google’s open-source Agent Development Kit (ADK), which is designed for exactly this kind of multi-agent orchestration. ADK allowed us to define each agent with a specialized role and have them interact in a coordinated workflow. Our architecture follows a sequential pipeline: the Strategist agent’s output feeds into the Copywriter agent, and then the Reviewer agent operates on the Copywriter’s draft. We used ADK’s sequential workflow agent to ensure the agents execute in order and share data smoothly. Each agent is implemented as an LlmAgent under the hood, powered by large language models. Specifically, we tapped into Google’s state-of-the-art Gemini model through Vertex AI to fuel the creativity and intelligence of our agents. The Strategist agent prompt is engineered to produce a concise marketing plan (focusing on audience, tone, key points), the Copywriter agent prompt is tuned to expand that plan into persuasive text, and the Reviewer agent prompt is designed to critique and improve the text (we gave it guidelines to enforce brevity, active voice, and brand-appropriate tone). We took advantage of ADK’s support for tool integrations as well – for instance, the Strategist agent can call external tools if needed, such as a web search for trending keywords or a BigQuery lookup for relevant industry data. (ADK’s rich tool ecosystem makes it easy to plug in such capabilities.) In our case, we incorporated a simple BigQuery query mechanism so the Strategist could optionally pull insights like top customer interests for the product category, inspired by how marketing teams leverage analytics. The Google Cloud platform was instrumental in our build: we used Vertex AI to access the large models (ensuring we could switch between Gemini and other models from the Model Garden), and we containerized the whole app and deployed it on Cloud Run for a scalable, serverless demo.

This means anyone can hit our endpoint (or UI) to invoke the multi-agent workflow live, with the agents orchestrated on the backend. Our submitted architecture diagram shows how the pieces fit: the front-end (a simple web form) sends a request to our Cloud Run API, which triggers the ADK agent sequence; the agents may interact with Vertex AI and any data tools (like BigQuery) if needed, then return the final content. By leveraging ADK and Google Cloud, we were able to build this complex system in a matter of weeks – ADK handled the heavy lifting of agent management, letting us focus on crafting good prompts and logic for each agent. (In fact, we found that giving the agents well-defined schemas for their input/output – e.g. a structured strategy outline object – was critical; this echoes insights from other ADK projects where clearly defined data models made agent coordination more reliable.) Overall, our build marries multi-agent AI design with Google Cloud’s AI ecosystem: the result is a robust application where three AI agents work in harmony to generate content that’s more thoughtful than what a single agent could produce alone.
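To make the sequential hand-off concrete, here is a minimal, self-contained sketch of the Strategist → Copywriter → Reviewer flow. It is an illustration, not our production code: the `call_model` and `run_pipeline` names are invented for this example, and `call_model` is a placeholder standing in for the real ADK/Vertex AI model calls that our sequential workflow agent manages.

```python
# Minimal sketch of the Strategist -> Copywriter -> Reviewer pipeline.
# `call_model` is a stand-in for the real Gemini call via Vertex AI;
# in the actual build, ADK's sequential workflow agent chains the agents.

def call_model(role_prompt: str, payload: str) -> str:
    """Placeholder for an LLM call; returns a tagged echo for illustration."""
    return f"[{role_prompt}] {payload}"

def run_pipeline(product_info: str) -> str:
    # Each stage consumes the previous stage's output, mirroring our agents.
    strategy = call_model("Strategist: produce audience, tone, key points", product_info)
    draft = call_model("Copywriter: expand the strategy into copy", strategy)
    final = call_model("Reviewer: polish tone, check facts, tighten", draft)
    return final

print(run_pipeline("EcoFlask: insulated, recyclable water bottle"))
```

The point of the sketch is the data flow: each agent only ever sees the previous agent’s output plus its own role instructions, which is what made the prompt design for each stage manageable.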

Challenges we ran into

• Orchestrating Multi-Agent Communication: Designing the hand-off between agents was tricky. We needed the Strategist’s output to be just detailed enough for the Copywriter to use, but not overly restrictive. We iterated on the format of that hand-off (ultimately using a structured JSON with fields like target_audience, key_messages, etc.) to ensure the Copywriter agent got clear guidance. Ensuring each agent “understood” the others’ outputs required careful prompt engineering and use of ADK’s data schemas.
• Maintaining Consistent Tone and Facts: Generating marketing copy that is catchy yet accurate was a balancing act. Early on, we found the Copywriter agent might produce text that sounded great but included an exaggerated claim or drifted from the product info. The Reviewer agent’s challenge was to catch these issues. We had to refine the Reviewer’s prompt to enforce fact-checking against the original product details and to uphold a consistent tone. This was essentially designing an AI editor – not an easy feat, as it needed to be critical but not overcorrect or strip away the creative flair.
• Integrating Google Cloud Services: Using multiple Google Cloud services together introduced some DevOps challenges. For example, connecting to Vertex AI endpoints required proper authentication and permission setup in our Cloud Run container. We also enabled BigQuery access; even though our use of it was light, configuring service accounts and roles for the agents to query BigQuery securely took time. Debugging these cloud integration issues (e.g. ensuring our service account JSON was correctly mounted in Cloud Run) was a less glamorous but necessary part of the build.
• Performance and Cost Considerations: With three agents running sequentially (and each calling a large LLM), we were mindful of latency and quota limits. We faced the challenge of optimizing prompts to keep responses concise (so that the next agent could process quickly) and considered using smaller/faster model variants for certain agents. We tested “Gemini-light” models for the Reviewer agent, for instance, to reduce total runtime. In a few cases, we also had to handle API rate limits and errors – e.g. if one agent’s generation took too long or hit a token limit, we implemented a retry or fallback strategy. Ensuring the pipeline runs reliably within the 3-minute demo video window was a challenge, but we managed to get it under that time by caching certain steps during demo (for example, reusing the Strategist output if it was already generated for the same input).
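The retry-or-fallback strategy mentioned above can be sketched in a few lines. This is a simplified illustration, not our actual implementation: the `with_retry` helper and the `flaky_generate` example are invented names, and the real version wraps our model calls with cloud-appropriate timeouts.

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01, fallback=None):
    """Call fn(); on failure, retry with exponential backoff, then fall back."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                if fallback is not None:
                    return fallback()  # e.g. a smaller/faster model variant
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off between tries

# Example: a "generation" that times out twice, then succeeds on the third call.
calls = {"n": 0}
def flaky_generate():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("model call timed out")
    return "final copy"

print(with_retry(flaky_generate))
```

In practice the `fallback` slot is where a lighter model variant would go, so a slow or rate-limited call degrades gracefully instead of stalling the whole pipeline.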

Accomplishments that we’re proud of

• Three Agents, One Cohesive Team: We successfully built a multi-agent system where each agent’s expertise complements the others. This was more than just chaining three prompts – the agents truly behave like a team, and seeing the first end-to-end run where the Strategist’s plan turned into a polished ad blurb was a “wow” moment. We’re proud that our orchestrated agents approach worked on the first try during our demo, proving the viability of collaborative AI for content creation.
• High-Quality Marketing Copy Generation: The quality of the output exceeded our expectations. The copy reads as if crafted by a human marketing team – it’s catchy, coherent, and aligned with the intended brand voice. For instance, in one test for a fictitious smart coffee mug, the system coined the tagline “Your Perfect Sip, Every Trip” and wrote a compelling description highlighting the mug’s temperature control and on-the-go convenience, all with a friendly, upbeat tone. Achieving this level of creativity and consistency from AI agents is something we’re very proud of.
• Deep Integration with Google Cloud Tech: We didn’t just use ADK in isolation; we went the extra mile to integrate multiple Google Cloud tools, which was encouraged in this hackathon. We’re proud to have leveraged Vertex AI’s newest model (Gemini) as our content engine – it handled the nuanced language generation impressively. We also containerized our app and deployed on Cloud Run, meaning our project isn’t just a local prototype but a live cloud-based application. Additionally, we set up BigQuery and Cloud Storage support for future data enhancements. By hitting these marks, we feel we maximized the use of Google’s ecosystem – something the hackathon organizers gave bonus points for (using Google tech, contributing to open source, etc.).
• Rapid Development & Open Source: In just a few weeks, we went from idea to working prototype. Along the way, we kept our code modular and clean. We’re proud that we’ve open-sourced the project repository, complete with documentation and an architecture diagram, so others can learn from or build upon our work. In the spirit of community, we even wrote a short Medium article about our approach (and referenced some Google ADK sample projects) to contribute back knowledge. Seeing positive feedback from peers and interest in using our “Marketing Taskforce” approach in other domains (like someone mentioned it could help generate e-commerce product listings) was very rewarding.

What we learned

• Power of Multi-Agent Design: This project was a crash course in designing collaborative AI agents. We learned first-hand that breaking a complex task into specialized roles can lead to better outcomes. Each agent could focus on a sub-problem (planning, writing, or editing), which made prompt design more manageable and the overall output more coherent. This validated the multi-agent approach for us – it’s not just hype. We saw how agents, like humans, benefit from division of labor and can even correct each other’s mistakes.
• Importance of Clear Interfaces: We discovered how crucial it is to define clear interfaces and data structures for agents to communicate. Early on, we had agents passing raw text in a loose format and it caused confusion (the Copywriter sometimes misunderstood the Strategist’s intent). We then applied a lesson that other ADK developers have noted: use structured data models to exchange info. By using, for example, a Pydantic schema (in Python) for the strategy output, we ensured the Copywriter got a well-defined input (like {"audience":"...","message_points":[...],...}). This made the pipeline far more reliable. In short, we learned to treat agent outputs as APIs – with proper contracts – rather than free-form text.
• Prompting and Iteration: Even with powerful LLMs, how you prompt them makes all the difference. We went through numerous prompt iterations for each agent. We learned to phrase the Strategist’s instructions in a way that yields a useful game plan instead of generic advice. For the Copywriter, we experimented with example-based prompts versus direct instructions and learned that providing a short example of desired output format boosted its performance. And for the Reviewer, we learned to strike a balance in the prompt: if it’s too lenient, it misses errors; if it’s too harsh, it over-edits the copy. Crafting the right “persona” and criteria for the Reviewer was educational – we even borrowed techniques from how human editors work (like having it read the text aloud internally to catch awkward phrasing). The iterative prompt tuning process taught us a lot about effective communication with AI.
• ADK & Cloud Tools Mastery: On the technical side, we gained plenty of know-how in using the ADK and Google Cloud services together. We learned how ADK’s workflow agents (Sequential and Parallel) function and how to debug multi-agent flows using the ADK CLI and web UI – watching the agents step through their tasks was enlightening and helped us fix logic bugs. We also learned best practices for deploying AI workloads: for example, packaging models and tools in a container, using Google Cloud’s IAM roles for secure API access, and scaling a Cloud Run service. Additionally, working with the (still evolving) Gemini model taught us how to handle an AI model in preview – we had to read up on its parameters and adjust for any model quirks (like it sometimes being too verbose). All these experiences have sharpened our skills in multi-agent system development and cloud deployment.
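The "treat agent outputs as APIs" lesson is easy to show. Our project used a Pydantic schema; here is an equivalent sketch using only stdlib dataclasses, with illustrative field values (the `StrategyBrief` name and the example product are invented for this snippet):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class StrategyBrief:
    """Structured hand-off from the Strategist to the Copywriter."""
    audience: str
    tone: str
    message_points: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialized form travels between agents instead of free-form text.
        return json.dumps(asdict(self))

brief = StrategyBrief(
    audience="eco-conscious commuters",
    tone="friendly and upbeat",
    message_points=["insulated design", "recyclable materials"],
)
print(brief.to_json())
```

Because the Copywriter's prompt can reference fields by name (`audience`, `tone`, `message_points`), the hand-off stops depending on the Strategist happening to phrase things a particular way.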

What’s next for Gemini Marketing Taskforce

Our project opened up many exciting directions. Here’s what the future could hold:

• Expand to Multi-Modal Marketing: Right now, our taskforce focuses on textual content. A clear next step is to incorporate visual and audio generation. We’d like to add an Image Designer Agent and perhaps a Video Script Agent to the team. Imagine the Strategist and Copywriter coming up with a campaign slogan and blurb, and then an Image Agent creating a matching graphic or ad banner, followed by a Video Agent generating a short promo clip or voice-over. In fact, other hackathon projects have shown the feasibility of this – one team’s agent could produce not just captions but also images, videos, and even voice-overs for a brand post. Enabling our agents to generate or coordinate with visual creatives would make the Gemini Marketing Taskforce a one-stop solution for campaign assets. For example, along with copy, it could output a generated product hero image with the tagline overlaid – ready for social media. We plan to leverage Google’s image generation APIs or third-party generative models (and of course coordinate them via ADK) to make this happen.

• Data-Driven Personalization: Another promising direction is integrating deeper with data to personalize and strategize better. In marketing, data is king – companies have tons of it in analytics platforms like Google Analytics 4, often stored in BigQuery. We envision a future agent in the taskforce (or an extension of the Strategist) that can tap into such data to glean insights. For instance, it could identify which product features resonate most with a certain demographic, or find trending consumer interests, and then tailor the copy accordingly. One demo by Google Cloud showed how an agent could query GA4 data in BigQuery to extract audience insights and inform campaign strategy. We’d like to do something similar: enabling marketers to ask natural-language questions (e.g. “What do 18-25 year-olds say about our product in reviews?”) and have the agent pull the answer from data to refine the marketing angle. This could be achieved by integrating a natural-language-to-SQL tool or using Vertex AI’s analysis capabilities so that our Strategist agent can ground its plans in real customer data. The result would be marketing copy that isn’t just generically good, but hyper-targeted and backed by data.

• Interactive Collaboration & Feedback Loops: Currently, our system generates content in one go. In the future, we want to make it more interactive and iterative. That means allowing the user (or a marketer) to engage in a dialogue with the agents. You could imagine that after the first output, the marketer says “Actually, emphasize the eco-friendly aspect more,” and the agents would loop back – the Strategist might adjust the strategy, the Copywriter revises the copy, and the Reviewer checks the new version. This kind of feedback loop would make the tool more of a collaborative partner (like a human team) than a one-shot generator. ADK already supports dynamic agent interactions within a session, so implementing a chat refinement cycle is within reach. We’d need to maintain state and ensure agents can incorporate user feedback mid-process. Achieving this would make the taskforce feel truly intelligent and user-responsive.

• Wider Range of Content & Channels: We designed the system for short-form marketing copy, but the framework could extend to other content formats. “What about a blog post or an email newsletter?” – we’ve thought about introducing agents for long-form content. For example, a Blog Writer Agent that takes the strategist’s outline and produces a full article, or an Email Draft Agent for crafting marketing emails. With some tweaks, the copywriter agent could be repurposed for these formats (or new specialized agents added). We’d also love to adapt the outputs to various channels automatically – akin to how one project formatted content for Instagram vs. Twitter differently. Our Format/Response module could be enhanced to output platform-specific versions of the copy (e.g. a 280-character version for Twitter/X, a more visual, hashtag-rich caption for Instagram). This would further automate multi-channel marketing efforts.

• Continuous Learning and Fine-Tuning: As a longer-term goal, we want the Gemini Marketing Taskforce to learn from each campaign it generates. This could involve capturing feedback on how the generated copy performs in the real world (did the social post get high engagement? did the tagline resonate with customers?) and using that to fine-tune the agents. We could fine-tune the underlying LLM on a dataset of high-performing marketing copy, or implement a reinforcement learning loop where the Reviewer agent (or a separate Evaluator agent) scores the success of content and tweaks the approach. While this is ambitious, it aligns with the idea of an AI marketing team that gets smarter over time – eventually suggesting not just copy, but data-backed reasons for why certain messaging works best.

• Production-Ready Integration: Lastly, we aim to turn this prototype into a tool that marketing teams can use in their daily workflow. That could mean building a slicker UI (perhaps a web app or a plugin for popular marketing platforms) where users input product info and get copy suggestions they can tweak and approve. We’d integrate authentication, user-specific settings (like brand voice profiles), and collaboration features so multiple team members could review the AI output. Deploying on Google Cloud has already set the stage for scaling – we can handle multiple requests and ramp up instances as needed. What remains is to harden the system (better error handling, security for data connections) and possibly go through compliance checks if used in enterprise settings. The fact that we built this with cloud-native services means we can relatively easily evolve it into a reliable SaaS offering for content generation. Imagine a “Marketing Taskforce” button inside a CMS or ad management tool – that’s the dream scenario we’ll work towards.
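The platform-specific formatting idea can be sketched simply. This is an illustrative toy, not a committed design: the `format_for_platform` name, the truncation rule, and the hashtags are made up for this example (the real Twitter/X limit of 280 characters is the one hard constraint shown).

```python
def format_for_platform(copy_text: str, platform: str) -> str:
    """Adapt one piece of copy to a channel's conventions (illustrative rules)."""
    if platform == "twitter":
        # Twitter/X caps posts at 280 characters; truncate with an ellipsis.
        return copy_text if len(copy_text) <= 280 else copy_text[:279] + "…"
    if platform == "instagram":
        # Instagram favors hashtag-rich captions.
        return copy_text + "\n#EcoFriendly #Hydration"
    return copy_text  # default: pass the copy through unchanged

blurb = "Hydration Meets Sustainability – the bottle that keeps drinks cold and the planet cooler."
print(format_for_platform(blurb, "twitter"))
print(format_for_platform(blurb, "instagram"))
```

A fuller version would likely be its own agent (or a post-processing step after the Reviewer), with per-channel voice profiles instead of hard-coded rules.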

In summary, Gemini Marketing Taskforce has proven how powerful multi-agent systems can be for content creation. We drew on the latest ideas and tools – from Google’s Gemini model to ADK orchestration patterns – and even learned from fellow hackathon projects to shape our approach. The result is a cooperative AI trio that can generate marketing copy with creativity, strategy, and polish. We’re excited to continue developing this project beyond the hackathon, pushing the boundaries of what AI agents can do in the marketing domain, and hopefully making life easier for all the marketers and creators who are ready to welcome their new AI teammates!
