<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Mohammad Jamalianpour on Medium]]></title>
        <description><![CDATA[Stories by Mohammad Jamalianpour on Medium]]></description>
        <link>https://medium.com/@jamalianpour?source=rss-567751062fb9------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*vwCT-b6tPrnUqbotQkAEug@2x.jpeg</url>
            <title>Stories by Mohammad Jamalianpour on Medium</title>
            <link>https://medium.com/@jamalianpour?source=rss-567751062fb9------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 16 Apr 2026 19:10:43 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@jamalianpour/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Semantic LLM Cache: Vector-Based Caching for Java (Spring Boot)]]></title>
            <link>https://jamalianpour.medium.com/semantic-llm-cache-vector-based-caching-for-java-spring-boot-648d4435b7b8?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/648d4435b7b8</guid>
            <category><![CDATA[spring-boot]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Tue, 03 Feb 2026 17:04:18 GMT</pubDate>
            <atom:updated>2026-02-03T19:44:29.287Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4P9Fopsvy_yUklsyD7dF3Q.png" /><figcaption>Semantic LLM Cache for Spring Boot</figcaption></figure><h3>How Vector Embeddings Can Slash Your LLM API Costs by 80%</h3><p>If you’re building applications powered by large language models, you’ve probably noticed something painful: your API bill keeps growing. Not because your application is doing anything particularly complex, but because users keep asking variations of the same questions, and each variation triggers a fresh API call.</p><p>This is the hidden tax of working with LLMs, and today I want to show you how to solve it with semantic caching.</p><h3>The Problem with Traditional Caching</h3><p>Let’s say you’re building a customer support chatbot. A user asks, “How do I reset my password?” Your application calls the OpenAI API or any other LLM provider, gets a response, and you wisely decide to cache it. Smart move.</p><p>But then another user comes along and asks, “I forgot my password, what should I do?” Your cache looks at this query, compares it character by character with what’s stored, and finds no match. So off goes another API request. You pay again and the user waits again.</p><p>Here’s what traditional caching sees:</p><pre>&quot;How do I reset my password?&quot; ≠ &quot;I forgot my password, what should I do?&quot;</pre><p>Different strings. Cache miss. End of story.</p><p>Now imagine this happening hundreds or thousands of times per day across all the different ways people phrase the same questions. 
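</p><p>The failure mode is easy to reproduce with a plain string-keyed map (a toy illustration, not any particular cache library):</p>

```java
import java.util.HashMap;
import java.util.Map;

public class ExactMatchCacheDemo {
    public static void main(String[] args) {
        // A traditional cache keys on the literal query string.
        Map<String, String> cache = new HashMap<>();
        cache.put("How do I reset my password?", "To reset your password...");

        // A paraphrase of the same question misses, so the LLM would be called again.
        String hit = cache.get("I forgot my password, what should I do?");
        System.out.println(hit == null ? "cache miss" : hit);
    }
}
```
<p>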
Your cache hit rate stays frustratingly low, and your costs stay frustratingly high.</p><p>The fundamental issue is that traditional caching operates on syntax (the exact sequence of characters) when what we really care about is semantics: the meaning behind those characters.</p><h3>Enter Semantic Caching</h3><p>What if your cache could understand that “How do I reset my password?” and “I forgot my password, what should I do?” are essentially asking the same thing? What if it could recognize meaning, not just text?</p><p>This is what semantic caching does. Instead of storing queries as raw strings, it converts them into vector embeddings: mathematical representations that capture the semantic meaning of text. When a new query arrives, the system converts it to a vector and searches for similar vectors in the cache. If it finds one above a certain similarity threshold, it returns the cached response without ever touching the LLM API.</p><p>This happens because embedding models are trained to place semantically similar text close together in vector space. 
“Reset my password” and “forgot my password” end up as neighboring vectors, even though they share few common words.</p><h4>How It Works Under the Hood</h4><pre><br>┌─────────────────────────────────────────────────────────────────┐<br>│                      User Query                                 │<br>│              &quot;What&#39;s the capital of France?&quot;                    │<br>└─────────────────────────────────────────────────────────────────┘<br>                              │<br>                              ▼<br>┌─────────────────────────────────────────────────────────────────┐<br>│                   L1 Cache (Exact Match)                        │<br>│                    Caffeine In-Memory                           │<br>│                      &lt; 1ms lookup                               │<br>└─────────────────────────────────────────────────────────────────┘<br>                              │ Miss<br>                              ▼<br>┌─────────────────────────────────────────────────────────────────┐<br>│                    Embedding Provider                           │<br>│            Convert text → [0.023, -0.891, 0.445, ...]           
│<br>│               (ONNX, OpenAI, Ollama, Azure)                     │<br>└─────────────────────────────────────────────────────────────────┘<br>                              │<br>                              ▼<br>┌─────────────────────────────────────────────────────────────────┐<br>│                 L2 Cache (Semantic Search)                      │<br>│              Vector similarity search in storage                │<br>│              (Redis, Elasticsearch, In-Memory)                  │<br>│                                                                 │<br>│   Cached: &quot;Tell me France&#39;s capital&quot; → similarity: 0.94 ✓       │<br>└─────────────────────────────────────────────────────────────────┘<br>                              │ Hit!<br>                              ▼<br>┌─────────────────────────────────────────────────────────────────┐<br>│                     Return: &quot;Paris&quot;                             │<br>│                  Total time: ~10-50ms                           │<br>│                  Cost: $0.00                                    │<br>└─────────────────────────────────────────────────────────────────┘</pre><p>The dual-level architecture is key:</p><ul><li><strong>L1 Cache</strong>: Exact matches with sub-millisecond latency</li><li><strong>L2 Cache</strong>: Semantic matches with millisecond latency (Vector storage)</li></ul><h3>Introducing Semantic LLM Cache for Java</h3><p>Semantic LLM Cache is my open-source library that brings this capability to Java and Spring Boot applications. It handles the complexity of embedding generation, vector storage, and similarity search, exposing a simple interface that feels native to Spring developers.</p><p>Let me walk you through how it works and how to integrate it into your application.</p><h3>Getting Started</h3><p>First, add the dependencies to your project. 
You’ll need three components: the Spring Boot starter, a storage backend, and an embedding provider.</p><p><a href="https://github.com/Jamalianpour/semantic-llm-cache">GitHub - Jamalianpour/semantic-llm-cache: Semantic caching for LLM API responses in Spring Boot applications</a></p><p>Each component is modular by design. You can swap storage backends or embedding providers without changing your application code. This matters because your needs will evolve: you might start with in-memory storage for development and move to Redis or Elasticsearch for production.</p><h3>Configuration</h3><p>Next, configure the cache in your application.yml:</p><pre>semantic-cache:<br>  embedding:<br>    provider: openai<br>    api-key: ${OPENAI_API_KEY}        # Your OpenAI API key<br>    model: text-embedding-3-small     # Cost-effective embedding model<br>  storage:<br>    type: memory                       # In-memory for development<br>  defaults:<br>    similarity-threshold: 0.92         # How similar queries must be (0.0 to 1.0)<br>    ttl: 24h                           # How long cached responses live</pre><p>The similarity-threshold parameter is crucial. Setting it to 0.92 means a query must be at least 92% similar to a cached query to be considered a match. Too low, and you&#39;ll return incorrect responses for genuinely different questions. Too high, and you&#39;ll miss opportunities to serve from cache. I&#39;ve found 0.90 to 0.95 works well for most conversational AI applications, but you should experiment with your specific use case.</p><h3>Using the Annotation</h3><p>Now comes the elegant part. 
To enable semantic caching on any method, simply add the @SemanticCache annotation:</p><pre>@Service<br>public class CustomerSupportService {<br><br>    private final OpenAiClient openAiClient;<br><br>    public CustomerSupportService(OpenAiClient openAiClient) {<br>        this.openAiClient = openAiClient;<br>    }<br><br>    @SemanticCache(<br>        namespace = &quot;support&quot;,    // Isolates this cache from others<br>        similarity = 0.92         // Override default threshold if needed<br>    )<br>    public String answerQuestion(String question) {<br>        // This only executes on cache misses<br>        return openAiClient.complete(question);<br>    }<br>}</pre><p>That’s genuinely all the code you need. When answerQuestion is called, the library intercepts the call, generates an embedding for the question, searches for similar cached entries, and either returns a cached response or proceeds to call your method and cache the result.</p><p>The namespace parameter creates logical separation between different caches. Your FAQ responses shouldn&#39;t interfere with your product recommendation cache, even if someone asks a similar-sounding question in both contexts.</p><h3>Understanding What Happens Under the Hood</h3><p>To use semantic caching effectively, it helps to understand the flow of operations.</p><p>When a query arrives, the library first generates a vector embedding. If you’re using OpenAI’s text-embedding-3-small model, this produces a 1536-dimensional vector — essentially a list of 1536 numbers that mathematically represent the meaning of your text. This embedding generation takes around 50-100ms and costs a fraction of a cent.</p><p>Next, the library searches the vector storage for similar embeddings. It uses cosine similarity, which measures the angle between two vectors. Identical vectors have a similarity of 1.0. Completely unrelated vectors approach 0.0. 
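</p><p>To make the similarity check concrete, here is a small self-contained sketch (illustrative only, not the library’s internal code) of cosine similarity and the threshold decision:</p>

```java
public class CosineSimilarityDemo {

    // Cosine similarity: dot(a, b) / (|a| * |b|). 1.0 means the vectors
    // point in the same direction; values near 0.0 mean they are unrelated.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; real ones have e.g. 1536 dimensions.
        double[] cachedQuery = {0.12, -0.80, 0.58};
        double[] newQuery    = {0.10, -0.75, 0.62};

        double similarity = cosineSimilarity(cachedQuery, newQuery);
        double threshold = 0.92;

        // Serve from cache only if the new query is similar enough.
        System.out.println(similarity >= threshold ? "cache hit" : "cache miss");
    }
}
```

<p>Real embeddings come from the embedding provider, but the hit-or-miss decision has exactly this shape.</p><p>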
The search returns the most similar cached entry along with its similarity score.</p><p>If the similarity exceeds your threshold, you have a cache hit. The library returns the stored response, and your LLM API is never called. Total latency: typically under 10ms for in-memory storage, under 50ms for Redis or Elasticsearch.</p><p>If the similarity falls below your threshold (or no similar entries exist), you have a cache miss. Your method executes normally, calling the LLM API. Before returning, the library caches both the query embedding and the response for future use.</p><h3>Choosing Your Storage Backend</h3><p>The library supports three storage backends, each suited to different scenarios.</p><p><strong>In-Memory Storage</strong> keeps everything in the JVM heap using a ConcurrentHashMap. It’s fast and requires no external infrastructure, making it perfect for development, testing, and small applications. The obvious limitation is that data disappears when your application restarts, and it doesn’t work across multiple application instances.</p><pre>semantic-cache:<br>  storage:<br>    type: memory</pre><p><strong>Redis Storage</strong> uses Redis Stack’s vector search capabilities. It provides persistence, sub-millisecond latency, and works across distributed deployments. If you’re already running Redis, this is often the natural choice for production.</p><pre>semantic-cache:<br>  storage:<br>    type: redis<br>    redis:<br>      host: localhost<br>      port: 6379</pre><p>You’ll need Redis Stack (not plain Redis) because it includes the RediSearch module with vector similarity search. The easiest way to get started is with Docker:</p><pre>docker run -d --name redis-stack -p 6379:6379 redis/redis-stack:latest</pre><p><strong>Elasticsearch Storage</strong> is designed for large-scale deployments. It handles millions of vectors efficiently and integrates well if you’re already using Elasticsearch for search or logging. 
The HNSW algorithm provides approximate nearest neighbor search that scales remarkably well.</p><pre>semantic-cache:<br>  storage:<br>    type: elasticsearch<br>    elasticsearch:<br>      uris: http://localhost:9200</pre><h3>Choosing Your Embedding Provider</h3><p>Vector quality matters. Better embeddings lead to more accurate similarity matching, which means higher cache hit rates and fewer false positives.</p><p><strong>OpenAI</strong> provides excellent embeddings with minimal setup. The text-embedding-3-small model offers a good balance of quality and cost at $0.02 per million tokens. For applications where embedding quality is critical, text-embedding-3-large provides measurably better results at a higher price point.</p><p><strong>Azure OpenAI</strong> offers the same models through Azure’s infrastructure, which matters for enterprises with compliance requirements or existing Azure investments.</p><p><strong>Ollama</strong> lets you run embedding models locally. This eliminates embedding costs entirely and keeps all data on your infrastructure. The nomic-embed-text model produces good quality embeddings and runs efficiently on modest hardware.</p><pre>semantic-cache:<br>  embedding:<br>    provider: ollama<br>    model: nomic-embed-text<br>    ollama-base-url: http://localhost:11434</pre><p><strong>ONNX</strong> goes a step further, running models directly in your JVM without any external service. This is ideal for offline deployments or when you want to minimize dependencies. The library includes several pre-trained models:</p><pre>semantic-cache:<br>  embedding:<br>    provider: onnx<br>    onnx:<br>      pretrained-model: ALL_MINILM_L6_V2   # Fast and lightweight</pre><p>The trade-off with local models is generally quality. OpenAI’s embeddings tend to outperform open-source alternatives, though the gap has been narrowing. 
For many applications, the cost savings of local embeddings outweigh the modest quality difference.</p><h3>Multi-Tenant Caching</h3><p>Real applications often serve multiple users or organizations, and you typically don’t want User A’s cached responses served to User B. The library handles this through context keys.</p><pre>@SemanticCache(<br>    namespace = &quot;support&quot;,<br>    similarity = 0.92,<br>    contextKeys = {&quot;#userId&quot;}    // Isolate cache by user<br>)<br>public String answerQuestion(String question, String userId) {<br>    return openAiClient.complete(question);<br>}</pre><p>The contextKeys parameter accepts SpEL expressions that evaluate to isolation keys. The cache effectively becomes partitioned — a query from User A only matches cached entries from User A&#39;s previous queries.</p><p>You can combine multiple context keys for more complex isolation:</p><pre>@SemanticCache(<br>    namespace = &quot;support&quot;,<br>    contextKeys = {&quot;#tenantId&quot;, &quot;#department&quot;}<br>)<br>public String answerQuestion(String question, String tenantId, String department) {<br>    return openAiClient.complete(question);<br>}</pre><h3>Cache Eviction</h3><p>When underlying data changes, you need to invalidate cached responses. The @SemanticCacheEvict annotation handles this:</p><pre>@SemanticCacheEvict(<br>    namespace = &quot;faq&quot;,<br>    key = &quot;#topic&quot;,<br>    similarity = 0.85    // Evict entries similar to this topic<br>)<br>public void updateFaqContent(String topic, String newContent) {<br>    faqRepository.update(topic, newContent);<br>}</pre><p>This is particularly powerful because eviction is also semantic. 
Updating the “password reset” FAQ entry will evict cached responses for “reset password,” “forgot password,” and other similar queries — exactly the behavior you want.</p><p>For bulk updates, you can clear an entire namespace:</p><pre>@SemanticCacheEvict(namespace = &quot;faq&quot;, allEntries = true)<br>public void rebuildAllFaqs() {<br>    // Clears all cached FAQ responses<br>}</pre><h3>Using Without Spring Boot</h3><p>While the library is optimized for Spring Boot, the core components work in any Java application:</p><pre>// Create embedding provider<br>EmbeddingProvider embeddings = OpenAiEmbeddingFactory.create(apiKey);<br><br>// Create storage backend<br>VectorStorage storage = RedisVectorStorageFactory.create(<br>    &quot;redis://localhost:6379&quot;, <br>    1536    // Dimensions must match embedding model<br>);<br><br>// Build the cache<br>SemanticCache cache = SemanticCache.builder()<br>    .embeddingProvider(embeddings)<br>    .storage(storage)<br>    .config(CacheConfig.builder()<br>        .similarityThreshold(0.92)<br>        .ttl(Duration.ofHours(24))<br>        .build())<br>    .build();<br><br>// Use it directly<br>cache.put(&quot;How do I reset my password?&quot;, &quot;To reset your password...&quot;);<br><br>Optional&lt;CacheHit&gt; hit = cache.get(&quot;I forgot my password&quot;);<br>if (hit.isPresent()) {<br>    System.out.println(&quot;Cache hit! Similarity: &quot; + hit.get().similarity());<br>    System.out.println(&quot;Response: &quot; + hit.get().response());<br>}</pre><p>This programmatic API gives you full control and works with any framework or no framework at all.</p><h3>Practical Tips for Production</h3><p><strong>Start with a higher similarity threshold and lower it gradually.</strong> A threshold of 0.95 is conservative: it only matches very similar queries. Watch your hit rate, and if it’s too low, gradually reduce the threshold. 
It’s easier to recover from being too strict than from serving wrong answers.</p><p><strong>Monitor false positives actively.</strong> Log cache hits along with the original cached query that matched. Review these periodically to ensure the matches make sense. One bad match that returns the wrong answer can erode user trust.</p><p><strong>Use namespaces liberally.</strong> Different types of queries benefit from different thresholds. Technical support queries might need strict matching (0.95), while casual FAQ queries can be more lenient (0.85). Separate namespaces let you tune each category independently.</p><p><strong>Consider embedding costs in your calculations.</strong> Every query generates an embedding, whether it hits the cache or not. With OpenAI’s small model, this is typically negligible ($0.02 per million tokens). But if you’re processing very high volumes, local ONNX embeddings might make more economic sense despite slightly lower quality.</p><p><strong>Warm your cache during deployment.</strong> If you have a known set of common queries, pre-populate the cache during application startup. This ensures good cache hit rates from the first request.</p><h3>Conclusion</h3><p>Semantic caching represents a fundamental shift in how we think about caching for AI applications. By operating on meaning rather than text, it unlocks cache hit rates that traditional approaches simply cannot achieve.</p><p>Semantic LLM Cache brings this capability to the Java ecosystem with a clean, modular design that respects how Spring developers build applications. Whether you’re building chatbots, FAQ systems, RAG applications, or any other LLM-powered feature, semantic caching can meaningfully reduce your costs and improve response times.</p><p>The library is open source and available on GitHub. 
Contributions, feedback, and feature requests are always welcome.</p><p><strong>GitHub:</strong> <a href="https://github.com/Jamalianpour/semantic-llm-cache">https://github.com/Jamalianpour/semantic-llm-cache</a></p><p><em>If you found this useful, consider giving the repository a star. It helps others discover the project and motivates continued development.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=648d4435b7b8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reduce LLM API Costs: TOON Spring Boot Dependency]]></title>
            <link>https://jamalianpour.medium.com/reduce-llm-api-costs-toon-spring-boot-dependency-0775e12c5fe7?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/0775e12c5fe7</guid>
            <category><![CDATA[java]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[spring-boot]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[toon]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Sun, 30 Nov 2025 18:13:29 GMT</pubDate>
            <atom:updated>2025-11-30T18:13:29.321Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2H6-oB_VE0w4eVNIyl6G5Q.png" /></figure><p>Recently I faced a problem with LLM API costs in our company project. We were using GPT-5 for analyzing customer data, and the costs were becoming too high. After investigating, I found that the main issue was not our prompts — it was the JSON format we used for sending data.</p><p>I want to share my solution and the Spring Boot library I developed to solve this problem.</p><h4>The Problem with JSON</h4><p>When you send structured data to LLM APIs like OpenAI, every character counts toward your token usage. JSON format has many repeated elements. Look at this example:</p><pre>{<br>  &quot;interactions&quot;: [<br>    {&quot;customerId&quot;: 1001, &quot;type&quot;: &quot;support&quot;, &quot;duration&quot;: 342, &quot;resolved&quot;: true, &quot;satisfaction&quot;: 4.5},<br>    {&quot;customerId&quot;: 1002, &quot;type&quot;: &quot;sales&quot;, &quot;duration&quot;: 567, &quot;resolved&quot;: false, &quot;satisfaction&quot;: 3.2},<br>    {&quot;customerId&quot;: 1003, &quot;type&quot;: &quot;support&quot;, &quot;duration&quot;: 234, &quot;resolved&quot;: true, &quot;satisfaction&quot;: 4.8}<br>  ]<br>}</pre><p>Each object repeats the same field names. When you have hundreds of records, this repetition wastes many tokens. In our case, we were sending this kind of data many times per day.</p><h4>Discovering TOON Format</h4><p>While researching optimization methods, I found TOON (Token-Oriented Object Notation). This format is designed specifically for reducing tokens when working with LLMs. Here is the same data in TOON:</p><pre>interactions[3]{customerId,type,duration,resolved,satisfaction}:<br>  1001,support,342,true,4.5<br>  1002,sales,567,false,3.2<br>  1003,support,234,true,4.8</pre><p>The difference is clear. Field names appear only once at the beginning. No quotes for simple values. No braces for each object. 
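</p><p>The tabular encoding itself is simple enough to sketch in a few lines of plain Java. This is a hand-rolled illustration of the format for uniform records, not the library’s actual converter:</p>

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToonSketch {

    // TOON tabular layout for uniform records:
    //   name[count]{field1,field2,...}:
    // followed by one comma-separated row per record.
    static String encode(String name, List<Map<String, Object>> records) {
        List<String> fields = List.copyOf(records.get(0).keySet());
        StringBuilder sb = new StringBuilder();
        sb.append(name).append('[').append(records.size()).append(']')
          .append('{').append(String.join(",", fields)).append("}:\n");
        for (Map<String, Object> record : records) {
            sb.append("  ").append(fields.stream()
                    .map(field -> String.valueOf(record.get(field)))
                    .collect(Collectors.joining(","))).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("customerId", 1001);
        row.put("type", "support");
        row.put("duration", 342);
        System.out.print(encode("interactions", List.of(row)));
        // interactions[1]{customerId,type,duration}:
        //   1001,support,342
    }
}
```
<p>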
The structure is more like a table.</p><p>For 100 records, our tests showed:</p><ul><li>JSON format: approximately 4,500 tokens</li><li>TOON format: approximately 2,400 tokens</li><li>Reduction: 47%</li></ul><p>This is a significant cost saving when you process many requests.</p><h3>TOON Spring Boot Library</h3><p><a href="https://github.com/Jamalianpour/toon-spring-boot">GitHub - Jamalianpour/toon-spring-boot: A Spring Boot library for converting Java objects to TOON (Token-Oriented Object Notation) format</a></p><p>Our backend systems use the Spring Boot framework. I needed to create a solution that integrates well with existing code. Manual conversion was not practical for production use.</p><p>I decided to build a library with these goals:</p><ul><li>Easy integration with Spring Boot</li><li>Annotation-based configuration</li><li>Automatic format detection</li><li>Good performance for large datasets</li></ul><pre>&lt;dependency&gt;<br>    &lt;groupId&gt;io.github.jamalianpour&lt;/groupId&gt;<br>    &lt;artifactId&gt;toon-spring-boot&lt;/artifactId&gt;<br>    &lt;version&gt;0.1.0&lt;/version&gt;<br>&lt;/dependency&gt;</pre><p>Here is the basic structure I implemented:</p><pre>@ToonSerializable<br>public class InteractionRecord {<br>    @ToonField(order = 1)<br>    private Long customerId;<br>    <br>    @ToonField(order = 2)<br>    private String type;<br>    <br>    @ToonField(order = 3)<br>    private Integer duration;<br>    <br>    @ToonField(order = 4)<br>    private Boolean resolved;<br>    <br>    @ToonField(order = 5)<br>    private Double satisfaction;<br>    <br>    @ToonIgnore<br>    private String internalNotes;  // This field will not be included<br>}</pre><p>The annotations provide control over the conversion process. 
The order attribute is important - it ensures consistent field ordering in the tabular output, which is necessary for the format to work correctly.</p><h4>Technical Implementation Details</h4><p>The converter has different strategies for different data types. When it encounters a collection of uniform objects (like database records), it automatically uses the tabular format. For mixed-type arrays, it uses a list format. For nested objects, it maintains the hierarchy with proper indentation.</p><p>The core conversion logic examines the data structure and chooses the optimal format:</p><pre>@RestController<br>public class DataController {<br>    private final ToonConverterService toonService;<br>    <br>    @PostMapping(&quot;/convert&quot;)<br>    public String convertData(@RequestBody List&lt;Record&gt; records) {<br>        // The service automatically detects uniform structure<br>        String toonData = toonService.convert(records);<br>        <br>        // Now you can send this to LLM with fewer tokens<br>        return sendToLLM(toonData);<br>    }<br>}</pre><p>To use the library in your Spring Boot application, you need to add the annotation:</p><pre>@SpringBootApplication<br>@EnableToonConverter<br>public class Application {<br>    public static void main(String[] args) {<br>        SpringApplication.run(Application.class, args);<br>    }<br>}</pre><h4>Integration with Existing Systems</h4><p>The library is designed to work alongside existing JSON processing. 
You can gradually migrate endpoints:</p><pre>@Service<br>public class DataService {<br>    public String processForLLM(List&lt;Data&gt; data) {<br>        // Convert only for LLM calls<br>        return toonService.convert(data);<br>    }<br>    <br>    public String processForAPI(List&lt;Data&gt; data) {<br>        // Keep JSON for other APIs<br>        return jsonMapper.writeValueAsString(data);<br>    }<br>}</pre><p>This approach allows testing and gradual adoption without breaking existing functionality.</p><h4>When to Use TOON</h4><p>TOON is most effective for:</p><ul><li>Uniform data structures (records from database)</li><li>Time-series data</li><li>Log entries</li><li>Any tabular data sent to LLMs</li></ul><p>TOON might not be optimal for:</p><ul><li>Deeply nested irregular structures</li><li>Data with high variation between objects</li><li>Systems that don’t use token-based pricing</li><li>Legacy systems that require JSON</li></ul><h4>Conclusion</h4><p>After using this library in production for a while, the results are clear. Token usage decreased significantly, which directly reduced our API costs. The implementation was straightforward and didn’t require major changes to our codebase.</p><p>For developers working with LLMs and structured data, TOON format offers a practical optimization. The Spring Boot library makes adoption simple. You can start with one endpoint and expand usage based on results.</p><p>The key lesson from this project: when new technologies like LLMs introduce new cost models (paying per token), maybe we need to reconsider our data formats. TOON is one solution to this specific problem.</p><p>The library is open source and available on GitHub. Feel free to contribute or report issues.</p><p>#SpringBoot #Java #LLM #OpenAI #TOON #TokenOptimization #AIIntegration</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0775e12c5fe7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Age of Artificial Intelligence: When Speed Becomes a Trap]]></title>
            <link>https://jamalianpour.medium.com/the-age-of-artificial-intelligence-when-speed-becomes-a-trap-5f5c8972cbbe?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/5f5c8972cbbe</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai-and-mental-health]]></category>
            <category><![CDATA[genrative-ai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[society]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Sat, 08 Nov 2025 16:21:37 GMT</pubDate>
            <atom:updated>2025-11-08T16:23:40.971Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VrdA496DwlEoXysuY0eZhQ.jpeg" /><figcaption>AI vs ability to wait</figcaption></figure><p>As someone who has been working in software development and artificial intelligence for years, I can confidently say that AI has changed everything. I use AI for almost everything now: writing code, debugging, drafting documentation, researching new frameworks, even brainstorming architecture decisions. It’s an incredible assistant. The productivity gains aren’t just incremental; they’re revolutionary. AI is, without a doubt, one of the most powerful tools I’ve ever wielded.</p><p>And yet, I’m exhausted.</p><p>Not by the work, but by the <em>everything else</em>. A strange, low-grade fatigue has set in, and it’s tied directly to the tool I’ve come to rely on.</p><p>Yet something has been bothering me lately, a feeling that crystallizes every time I open Instagram, LinkedIn, or X. The endless stream of AI-generated content washes over me — perfectly polished posts with that distinctive ChatGPT cadence, AI-generated images with their telltale smoothness, articles that read like they were assembled rather than written. It’s not that any individual piece is necessarily bad. It’s the cumulative weight of it all. It’s like living in an endless stream of artificial creativity: impressive at first, but after a while it just wears me out. The platforms that were supposed to connect us with human creativity and insight have become showcases for what machines can produce at scale.</p><p>But this is just an annoyance. My real concern runs much deeper.</p><p>Computers and artificial intelligence have undeniably accelerated our lives. But in doing so, I think we’ve lost something essential: <strong>the ability to wait.</strong> Now, when everything is instant, waiting feels like failure.</p><h4>For Example:</h4><p>Let’s say you want to start a new project. 
You have an idea for a web app.</p><ul><li><strong>The AI “Sprint” (1% to 50%):</strong> You feed the idea to an LLM. In an hour, it gives you the database schema, the API endpoints, the frontend component structure, and all the boilerplate code. You go from a blank folder to a 50% complete project in an afternoon. <strong>It’s an exhilarating rush.</strong></li><li><strong>The Human “Grind” (50% to 60%):</strong> Now, you have to move from 50% to 60%. This part isn’t boilerplate. This is the hard stuff. It’s integrating that one tricky third-party API. It’s refactoring the AI’s naive logic to handle real-world edge cases. It’s the subtle, nuanced work that defines the <em>actual</em> product. This 10% jump might take you one to three days.</li><li><strong>The Reality (60% to 70%):</strong> This next 10% jump (getting the authentication perfectly secure, optimizing the database queries, and refining the user experience) might take another week.</li></ul><p>The “1-to-50” sprint felt amazing. The “50-to-60” part doesn’t feel too bad, but the “60-to-70” grind feels awful. It feels slow, broken, and frustrating. Because AI has warped our perception of effort, the moment the work gets hard, we are conditioned to believe something is wrong. And here’s the trap: we’ve become so accustomed to that initial velocity, that intoxicating leap from one to fifty, that the normal pace of difficult work now feels unbearably slow.</p><p>So we abandon the project. We stop, convince ourselves it wasn’t the right idea anyway, and start something new. And the cycle repeats.</p><h4>The Vicious Loop of Impatience</h4><p>This is where the real danger lies. When you don’t know how to wait, when you haven’t built the mental resilience for the grind, you cannot learn new, hard things.</p><p>When we hit the 50% wall, we get frustrated. The project suddenly feels “boring” or “too hard.”</p><p>So what do we do?</p><p>We abandon the project.</p><p>We leave it at 50% and jump to a new idea. 
We get that “1-to-50” rush all over again, feel like a genius for an hour, hit the 50% wall, and abandon it. Again.</p><p>This is a vicious loop. It’s a cycle of hollow victories followed by real failures. You end up with a hard drive full of 50%-finished projects and zero 100%-finished ones. You learn how to start, but you never learn how to finish.</p><h3>In The End</h3><p>I want to be clear: I’m not suggesting we abandon AI tools. That would be neither practical nor desirable. AI has genuinely improved our work and life in countless ways. But I am suggesting that we need to become more conscious and more intentional about what we’re trading away in exchange for speed. We need to teach ourselves and the next generation <strong>not just how to use these powerful tools, but when to step away from them</strong>. We need to cultivate the capacity to sit with difficulty, to find meaning in slow progress, to build the psychological resilience that comes from patient effort over time.</p><p>The irony is that AI was meant to free us from repetitive work so we could focus on meaningful challenges. But if we’re not careful, it will also free us from the patience that makes challenges meaningful in the first place.</p><p>Perhaps the most important skill in the age of AI isn’t learning to prompt effectively or to integrate the latest models into our workflow. Perhaps it’s learning when to deliberately slow down, when to struggle without assistance, when to choose the hard way because the difficulty itself is the point. Because at the end of the day, the goal isn’t to finish projects as quickly as possible. 
The goal is to become the kind of person who can finish difficult projects at all — and that transformation happens not in the sprint from one to fifty, but in the long, patient journey from fifty to one hundred.</p><p>#ArtificialIntelligence #HumanVsMachine #DigitalExhaustion #CreativityInCrisis #SlowProductivity #TechReflection #MindfulInnovation #AIAndHumanity #AuthenticCreativity #ModernLife</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5f5c8972cbbe" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Flutter Build Directories Are Eating Your SSD: Here’s How to Fight Back]]></title>
            <link>https://medium.com/easy-flutter/flutter-build-directories-are-eating-your-ssd-heres-how-to-fight-back-3e4adf22058b?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/3e4adf22058b</guid>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[cli]]></category>
            <category><![CDATA[dart]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Thu, 06 Mar 2025 12:51:46 GMT</pubDate>
            <atom:updated>2025-03-27T21:24:54.212Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nnxNTKL_sw4YKcIICcmU6A.jpeg" /></figure><p>I’ve found myself working on several projects at once, sometimes 4 or 5 different apps in a single week. Flutter has been a revelation for cross-platform development. It has, however, one persistent irritation: the ever-growing build folders that take up a lot of disk space.</p><p>If you’re anything like me, you’ve had that moment when your disk space warning goes off, and you find that your Flutter projects are the ones to blame, with not just one or two but several gigabytes of cached builds silently piling up over time. In this article, I’ll walk you through how I solved this problem by creating a simple but powerful Dart command-line tool that automatically cleans up Flutter projects across my development directories.</p><h3>The Problem: Flutter’s Hungry Build Folders</h3><p>Flutter’s build process generates a significant number of intermediate files, compiled code, and assets in each project’s build directory. These files are essential during the development and build stages, but they can be safely cleaned up when not in active use.</p><p>The build directory for a single Flutter project can easily swell to hundreds of megabytes or even several gigabytes, especially when you’re building for multiple platforms. Working on several projects, of course, adds to this number even more. 
All told, we could be looking at significant disk space usage.</p><p>Flutter offers a simple way to clean a project: the flutter clean command, which removes the build directory and frees up the occupied space.</p><p>The hassle becomes apparent, though, when you have many projects scattered across your development folders and have to navigate into each one just to run the clean command.</p><h3>The Solution: A Dart CLI Tool for Automated Cleaning</h3><p><a href="https://github.com/Jamalianpour/f_cleaner">Jamalianpour/f_cleaner</a></p><p>To solve this problem efficiently, I created a Dart CLI application that:</p><ol><li>Scans specified directories (recursively by default)</li><li>Identifies Flutter projects by detecting the presence of a pubspec.yaml file with Flutter dependencies</li><li>Calculates the size of each project’s build directory</li><li>Runs flutter clean on each identified project</li><li>Reports on the total space saved</li></ol><p>This approach allows me to clean all my Flutter projects with a single command, saving both time and disk space.</p><h3>Building the CLI Tool</h3><p>Let’s walk through the key components of this solution. 
Our Dart CLI application consists of two main files: the main Dart code and a pubspec.yaml file for dependencies.</p><h3>Setting Up the Project</h3><p>First, I created a new Dart project with the necessary dependencies:</p><pre>name: f_cleaner<br>description: A CLI tool to scan directories for Flutter projects and run &#39;flutter clean&#39; to free up disk space.<br>version: 1.0.0<br><br>environment:<br>  sdk: &#39;&gt;=3.0.0 &lt;4.0.0&#39;<br><br>dependencies:<br>  args: ^2.6.0<br>  path: ^1.9.1<br><br>dev_dependencies:<br>  lints: ^2.1.0<br>  test: ^1.24.0<br><br>executables:<br>  f_cleaner: f_cleaner</pre><p>The args package provides command-line argument parsing capabilities, while path helps with cross-platform path manipulation.</p><h3>The Core Functionality</h3><p>The main code is organized into several key functions:</p><ol><li><strong>Argument Parsing</strong>: Handling command-line options for directory selection, recursive scanning, and verbosity</li><li><strong>Flutter Project Detection</strong>: Identifying valid Flutter projects</li><li><strong>Directory Scanning</strong>: Recursively exploring directories</li><li><strong>Size Calculation</strong>: Determining how much space will be freed</li><li><strong>Running Flutter Clean</strong>: Executing the command and handling results</li></ol><p>Here’s a simplified breakdown of the key logic:</p><pre>Future&lt;bool&gt; _isFlutterProject(String dirPath) async {<br>  // Check for pubspec.yaml file<br>  final pubspecFile = File(path.join(dirPath, &#39;pubspec.yaml&#39;));<br>  if (!await pubspecFile.exists()) {<br>    return false;<br>  }<br>  <br>  // Read pubspec.yaml and check for Flutter dependency<br>  try {<br>    final content = await pubspecFile.readAsString();<br>    return content.contains(&#39;flutter:&#39;) || content.contains(&#39;sdk: flutter&#39;);<br>  } catch (_) {<br>    return false;<br>  }<br>}<br><br>Future&lt;int&gt; _calculateDirectorySize(Directory dir) async {<br>  if (!await 
dir.exists()) {<br>    return 0;<br>  }<br>  <br>  int size = 0;<br>  try {<br>    await for (final entity in dir.list(recursive: true, followLinks: false)) {<br>      if (entity is File) {<br>        size += await entity.length();<br>      }<br>    }<br>  } catch (_) {<br>    // Ignore errors<br>  }<br>  <br>  return size;<br>}<br><br>Future&lt;ProcessResult&gt; _runFlutterClean(String projectDir, {required bool verbose}) async {<br>  if (verbose) {<br>    print(&#39;Running flutter clean in $projectDir&#39;);<br>  }<br>  <br>  return await Process.run(<br>    &#39;flutter&#39;,<br>    [&#39;clean&#39;],<br>    workingDirectory: projectDir,<br>    runInShell: true,<br>  );<br>}</pre><p>The main scanning function coordinates these operations and collects the results:</p><pre>Future&lt;CleanResults&gt; scanAndCleanFlutterProjects(<br>  String rootDirPath, {<br>  required bool recursive,<br>  required bool verbose,<br>}) async {<br>  final rootDir = Directory(rootDirPath);<br>  if (!await rootDir.exists()) {<br>    throw Exception(&#39;Directory does not exist: $rootDirPath&#39;);<br>  }<br><br>  int projectsFound = 0;<br>  int projectsCleaned = 0;<br>  int spaceFreed = 0;<br>  final futures = &lt;Future&gt;[];<br>  <br>  await for (final entity in _listDirectories(rootDir, recursive: recursive)) {<br>    if (await _isFlutterProject(entity.path)) {<br>      projectsFound++;<br>      <br>      if (verbose) {<br>        print(&#39;Found Flutter project at: ${entity.path}&#39;);<br>      }<br>      <br>      final buildDir = Directory(path.join(entity.path, &#39;build&#39;));<br>      final future = _calculateDirectorySize(buildDir).then((size) async {<br>        if (size &gt; 0) {<br>          try {<br>            final result = await _runFlutterClean(entity.path, verbose: verbose);<br>            if (result.exitCode == 0) {<br>              projectsCleaned++;<br>              spaceFreed += size;<br>              print(&#39;✓ Cleaned: ${entity.path} (freed 
${_formatSize(size)})&#39;);<br>            } else {<br>              print(&#39;✗ Failed to clean: ${entity.path}&#39;);<br>              if (verbose) {<br>                print(&#39;  Error: ${result.stderr}&#39;);<br>              }<br>            }<br>          } catch (e) {<br>            print(&#39;✗ Error cleaning: ${entity.path}&#39;);<br>            if (verbose) {<br>              print(&#39;  Error: $e&#39;);<br>            }<br>          }<br>        } else if (verbose) {<br>          print(&#39;• Skipped: ${entity.path} (no build directory or empty)&#39;);<br>        }<br>      });<br>      <br>      futures.add(future);<br>    }<br>  }<br>  <br>  await Future.wait(futures);<br>  <br>  return CleanResults(<br>    projectsFound: projectsFound,<br>    projectsCleaned: projectsCleaned,<br>    spaceFreed: spaceFreed,<br>  );<br>}</pre><p>One key optimization in this design is the use of parallel processing with futures. Instead of cleaning each project sequentially, the tool launches multiple cleaning operations concurrently, making the process much faster, especially when dealing with many projects.</p><h3>Using the Tool</h3><p>With the CLI tool complete, usage is straightforward. 
After installation, it can be run with various options:</p><pre># Clean all Flutter projects in current directory and subdirectories<br>f_cleaner<br><br># Clean Flutter projects in a specific directory<br>f_cleaner --dir=/path/to/your/flutter/projects<br><br># Non-recursive scan (only check the specified directory)<br>f_cleaner --dir=/path/to/your/flutter/projects --no-recursive<br><br># Dry run (scan and report but don&#39;t clean)<br>f_cleaner --dry-run<br><br># Skip confirmation prompt<br>f_cleaner --no-confirm<br><br># Show detailed output<br>f_cleaner --verbose</pre><p>The tool provides a clear summary after execution:</p><pre>Flutter Projects Cleaner 🧹<br>==========================<br>Scanning directory: /Users/username/development<br>Recursive scan: Yes<br><br>Found 3 Flutter project(s) with build directories:<br>- /Users/username/development/project1 (2.3 GB)<br>- /Users/username/development/project2 (1.8 GB)<br>- /Users/username/development/clients/project3 (3.2 GB)<br><br>Total space that can be freed: 7.3 GB<br>Do you want to proceed with cleaning these projects? 
[y/N]: y<br><br>✅ Cleaned: /Users/username/development/project1 (freed 2.3 GB)<br>✅ Cleaned: /Users/username/development/project2 (freed 1.8 GB)<br>✅ Cleaned: /Users/username/development/clients/project3 (freed 3.2 GB)<br>❌ Failed to clean: /Users/username/development/broken_project<br><br>Summary<br>-------<br>Flutter projects found: 4<br>Projects cleaned: 3<br>Approximate space freed: 7.3 GB<br>Time taken: 5 seconds</pre><h3>Benefits and Results</h3><p>After implementing this tool in my workflow, I’ve experienced several benefits:</p><ol><li><strong>Significant space savings</strong>: Regularly recovering 10–20 GB of disk space</li><li><strong>Time efficiency</strong>: What used to take manual effort now happens automatically</li><li><strong>Better organization</strong>: I no longer avoid cleaning projects due to the hassle</li><li><strong>Development speed</strong>: Less time fighting with disk space warnings means more time coding</li></ol><p>I’ve been running this tool as part of my weekly maintenance routine, and it has become an essential part of my Flutter development workflow.</p><h3>Conclusion</h3><p>As Flutter developers, we often focus on building great apps and overlook infrastructure improvements that can enhance our development experience. This simple CLI tool demonstrates how a small investment in automation can solve persistent annoyances and improve productivity.</p><p>The complete source code for this tool is available in the GitHub repository: <a href="https://github.com/Jamalianpour/f_cleaner">f_cleaner</a> (feel free to contribute or customize it for your needs).</p><p>If you’re a Flutter developer managing multiple projects, I encourage you to try this approach or build your own version. 
The few minutes spent setting up automation will save hours of manual work and gigabytes of disk space in the long run.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3e4adf22058b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/easy-flutter/flutter-build-directories-are-eating-your-ssd-heres-how-to-fight-back-3e4adf22058b">Flutter Build Directories Are Eating Your SSD: Here’s How to Fight Back</a> was originally published in <a href="https://medium.com/easy-flutter">Easy Flutter</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[DevUtils vs. DevToys vs. OpenDev: Which Developer Utility Tool Is Right for You?]]></title>
            <link>https://jamalianpour.medium.com/devutils-vs-devtoys-vs-opendev-which-developer-utility-tool-is-right-for-you-7739ab0adbf0?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/7739ab0adbf0</guid>
            <category><![CDATA[csharp]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[developer-tools]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Mon, 12 Aug 2024 15:08:20 GMT</pubDate>
            <atom:updated>2024-08-12T15:12:03.672Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WmiRsUpe2yzlpJe5_VgN0w.jpeg" /></figure><p>As a developer, having the right set of tools can make a significant difference in productivity. DevUtils, DevToys, and Open Dev are three utility tools that aim to streamline your workflow by offering a wide range of functionalities in one place. But with several options available, which one should you choose?</p><p>In this post, we’ll objectively compare these three tools across various features to help you decide which one suits your development needs best.</p><h3>Overview of the Tools</h3><h3>DevUtils</h3><p><a href="https://devutils.com/">DevUtils</a> is a macOS-exclusive utility designed for developers. It offers a variety of tools that integrate seamlessly with the macOS environment. With its clean and intuitive interface, DevUtils has become a favorite among macOS developers looking for a powerful and easy-to-use toolkit.</p><h3>DevToys</h3><p><a href="https://devtoys.app/">DevToys</a> is a free, open-source utility for Windows users. It’s often referred to as the “Swiss Army knife” for developers, providing a broad range of tools in a single, accessible application. DevToys is particularly appealing due to its simplicity and the wide array of functionalities it offers.</p><h3>OpenDev</h3><p><a href="https://github.com/Jamalianpour/open-dev/">OpenDev</a> is a newer entry, built with Flutter, making it available across multiple platforms — macOS, Windows, Linux, and Web. It combines many of the features found in DevUtils and DevToys while also introducing unique tools and a cross-platform approach. 
Open Dev is open-source, inviting contributions from the developer community.<br>Try it on the web: <a href="https://jamalianpour.github.io/open-dev/">OpenDev</a></p><h3>Feature Comparison</h3><p>To help you evaluate these tools, here’s a side-by-side comparison of their features:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/436/1*vwTIeEr98WvcvbpbJFb69g.png" /></figure><h3>Key Insights</h3><h4>1. Platform Compatibility</h4><ul><li><strong>DevUtils</strong> is exclusively available on macOS, making it the ideal choice for developers deeply integrated into the Apple ecosystem.</li><li><strong>DevToys</strong> is designed specifically for Windows, offering a tailored experience for users within the Microsoft environment.</li><li><strong>Open Dev</strong> is the most versatile, with support for macOS, Windows, Linux, and the web, catering to developers who work across multiple operating systems.</li></ul><h4>2. Unique Features</h4><ul><li><strong>SQL Formatter/Parser:</strong> DevToys offers a SQL formatter/parser, which can be a handy tool for developers working with databases.</li><li><strong>Cron Parser:</strong> Only Open Dev offers a cron parser, which can be particularly useful for developers dealing with cron jobs.</li><li><strong>README Helper:</strong> Open Dev also includes a README helper with a real-time viewer, a convenient feature for developers working on documentation.</li><li><strong>Markdown to HTML Converter:</strong> Both DevUtils and DevToys provide a converter from Markdown to HTML, which is not currently available in Open Dev.</li></ul><h4>3. 
Core Utilities</h4><ul><li>All three tools offer essential developer utilities such as JSON parsing, Unix time conversion, JWT debugging, and color conversion.</li><li>DevToys stands out with additional tools like the SQL Formatter and String Utilities, while Open Dev brings unique features like the Cron Parser and Developer News.</li></ul><h3>Strengths and Weaknesses</h3><h3>DevUtils</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*F2pESl5Qhl7tsF-F.png" /></figure><p><strong>Strengths:</strong></p><ul><li>Seamless integration with macOS.</li><li>Clean, intuitive interface.</li><li>Powerful core utilities for everyday developer tasks.</li><li>Includes tools like Markdown to HTML and HTML Formatter.</li></ul><p><strong>Weaknesses:</strong></p><ul><li>Limited to macOS, which excludes users on other platforms.</li><li>Lacks some additional tools like a cron parser or UUID generator.</li><li>Requires a license (from $40 to $80).</li></ul><h3>DevToys</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*JisJGwABo7XQ06QU.png" /></figure><p><strong>Strengths:</strong></p><ul><li>Free and open-source.</li><li>Wide range of tools in a single application.</li><li>Simple and user-friendly interface for Windows users.</li><li>Includes unique tools like SQL Formatter and String Utilities.</li></ul><p><strong>Weaknesses:</strong></p><ul><li>Not stable on macOS and Linux.</li><li>Lacks some unique features like a cron parser or README helper.</li></ul><h3>Open Dev</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*VPUz_sHGcR9XPgG7.png" /></figure><p><strong>Strengths:</strong></p><ul><li>Cross-platform support (macOS, Windows, Linux, web).</li><li>Open-source with potential for community-driven development.</li><li>Includes unique tools like the Cron Parser and README Helper.</li></ul><p><strong>Weaknesses:</strong></p><ul><li>Newer to the market, so it may not have the same level of polish as DevUtils.</li><li>Does not 
include some features available in other tools, like SQL formatting or Markdown to HTML conversion.</li></ul><h3>Conclusion</h3><p>Each of these tools has its strengths, depending on your specific needs and the platform you use:</p><ul><li><strong>DevUtils</strong> is perfect for macOS developers who want a highly polished, native experience.</li><li><strong>DevToys</strong> is ideal for Windows users looking for a free, comprehensive utility tool, especially with its SQL Formatter and other string utilities.</li><li><strong>Open Dev</strong> is a great option for developers who need a cross-platform toolset with some unique features not found in the other two.</li></ul><p>Ultimately, the best choice depends on your operating system, the features you need, and your preference for open-source versus proprietary tools. Explore each tool to see which one fits your workflow the best.</p><p>Try Open Dev on <a href="https://github.com/Jamalianpour/open-dev/">GitHub</a>, or check out DevUtils and DevToys to see how they can enhance your development environment.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7739ab0adbf0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A beautiful, easy to use and customizable time planner for Flutter]]></title>
            <link>https://jamalianpour.medium.com/a-beautiful-easy-to-use-and-customizable-time-planner-for-flutter-7143da51bad4?source=rss-567751062fb9------2</link>
            <guid isPermaLink="false">https://medium.com/p/7143da51bad4</guid>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[calendar]]></category>
            <category><![CDATA[dart]]></category>
            <category><![CDATA[planner]]></category>
            <category><![CDATA[time]]></category>
            <dc:creator><![CDATA[Mohammad Jamalianpour]]></dc:creator>
            <pubDate>Sun, 28 Mar 2021 17:21:36 GMT</pubDate>
            <atom:updated>2024-08-09T12:41:10.438Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aUnA2P9W52sjg3lDHEgJ2A.jpeg" /><figcaption>Flutter Time Planner</figcaption></figure><p>A few months ago, in one of my Flutter projects, I needed to show some reservations and tasks to the user. I decided to display them on a calendar because it makes it easy for the user to review everything quickly.</p><p>I started googling but couldn’t find any good package for this. Honestly, I did find one package, but it didn’t allow me to customize it or change the date.</p><p>So I created my own time table for my project, and I’ve now decided to publish it as an open-source pub package, so you can use it for free.</p><p><a href="https://pub.dev/packages/time_planner">time_planner | Flutter Package</a></p><p>This is a widget for showing tasks to the user on a time table.<br>Each row shows an hour and each column shows a day, but you can change the column titles to show anything else you want.<br>This package supports Flutter mobile 📱, desktop 🖥 and web 🌐.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/590/1*eAxI2yJ3Q3yvr-CzqYpfBg.gif" /></figure><p>You can see and test the web demo here: <a href="https://jamalianpour.github.io/time_planner_demo">https://jamalianpour.github.io/time_planner_demo</a></p><h3><strong>Usage:</strong></h3><p><strong>Step 1:</strong> Add this dependency to your pubspec.yaml file:</p><pre>dependencies:<br>  time_planner: ^0.1.2+1</pre><p>and then run flutter pub get to download the needed <em>dependencies</em>.</p><p><strong>Step 2: </strong>Import the time planner library into your file:</p><pre>import &#39;package:time_planner/time_planner.dart&#39;;</pre><p>Now you can use TimePlanner as a widget in your code.</p><p><strong>Step 3: </strong>Add TimePlanner under a widget like Scaffold or any other widget.</p><pre>TimePlanner(<br>  // time will start at this hour on the table<br>  startHour: 6,<br>  // time will end at this hour on the 
table<br>  endHour: 24,<br>  // each header is a column and a day<br>  headers: [<br>    TimePlannerTitle(<br>      date: &quot;3/10/2021&quot;,<br>      title: &quot;sunday&quot;,<br>    ),<br>    TimePlannerTitle(<br>      date: &quot;3/11/2021&quot;,<br>      title: &quot;monday&quot;,<br>    ),<br>    TimePlannerTitle(<br>      date: &quot;3/12/2021&quot;,<br>      title: &quot;tuesday&quot;,<br>    ),<br>  ],<br>  // list of tasks to show on the time planner<br>  tasks: tasks,<br>),</pre><p><strong>The TimePlanner widget takes 3 required arguments:</strong></p><ul><li><strong><em>startHour</em></strong>: The table starts at this hour; the minimum value is 1.</li><li><strong><em>endHour</em></strong>: The table ends at this hour; the maximum value is 24.</li><li><strong><em>headers</em></strong>: Each header is a column, and each column is a day. headers is a list of TimePlannerTitle, which takes a date and a title as strings.</li></ul><p><strong>The TimePlanner widget also takes 3 optional arguments:</strong></p><ul><li><strong><em>tasks</em></strong>: A list of TimePlannerTask that will be shown on the time planner as tasks. For more detail about TimePlannerTask, read below.</li><li><strong><em>currentTimeAnimation</em></strong>: When the time planner widget loads, it scrolls to the current local hour. This is true by default.</li><li><strong><em>style</em></strong>: You can change the style of the time planner with TimePlannerStyle. 
For more detail about TimePlannerStyle, read below.</li></ul><h4>Style:</h4><p>You can customize the style of the time planner with TimePlannerStyle:</p><pre>style: TimePlannerStyle(<br>  backgroundColor: Colors.blueGrey[700],<br>  // default value for height is 80<br>  cellHeight: 60,<br>  // default value for width is 90<br>  cellWidth: 60,<br>  dividerColor: Colors.white,<br>  showScrollBar: true,<br>),</pre><h4>Tasks:</h4><p>Now if you want to add a task to the time planner, you need to use TimePlannerTask:</p><pre>List&lt;TimePlannerTask&gt; tasks = [<br>  TimePlannerTask(<br>    // background color for the task<br>    color: Colors.purple,<br>    // day: index of header; hour: task will begin at this hour<br>    // minutes: task will begin at this minute<br>    dateTime: TimePlannerDateTime(day: 0, hour: 14, minutes: 30),<br>    // minutes duration of the task<br>    minutesDuration: 90,<br>    // days duration of the task (use for multi-day tasks)<br>    daysDuration: 1,<br>    onTap: () {},<br>    child: Text(<br>      &#39;this is a task&#39;,<br>      style: TextStyle(color: Colors.grey[350], fontSize: 12),<br>    ),<br>  ),<br>];</pre><p>Every task on the time planner is clickable, and you can set your own function on the task with onTap.</p><h3>Full example:</h3><p><a href="https://medium.com/media/20a7d99a0f94584b15209a5744599d85/href">https://medium.com/media/20a7d99a0f94584b15209a5744599d85/href</a></p><p>Thanks for your attention, and feel free to fork this repository and send a pull request 🏁👍</p><ul><li><a href="https://github.com/Jamalianpour/time_planner">Jamalianpour/time_planner</a></li><li><a href="https://pub.dev/packages/time_planner">time_planner | Flutter Package</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7143da51bad4" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>