<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Puranam Pradeep Picasso - ImbueDesk Profile on Medium]]></title>
        <description><![CDATA[Stories by Puranam Pradeep Picasso - ImbueDesk Profile on Medium]]></description>
        <link>https://medium.com/@imbuedeskpicasso?source=rss-f3467d786018------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*g5RqwC08XylBzj_xQU0gjw.jpeg</url>
            <title>Stories by Puranam Pradeep Picasso - ImbueDesk Profile on Medium</title>
            <link>https://medium.com/@imbuedeskpicasso?source=rss-f3467d786018------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 14 Apr 2026 03:23:51 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@imbuedeskpicasso/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[A Deep Dive Into The World’s First Stablecoin That Pays You To Learn While Protecting Against…]]></title>
            <link>https://imbuedeskpicasso.medium.com/a-deep-dive-into-the-worlds-first-stablecoin-that-pays-you-to-learn-while-protecting-against-f80d0d6e6a85?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/f80d0d6e6a85</guid>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <category><![CDATA[universal-basic-income]]></category>
            <category><![CDATA[cryptocurrency-news]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[stable-coin]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Wed, 17 Sep 2025 17:14:10 GMT</pubDate>
            <atom:updated>2025-09-17T17:14:10.173Z</atom:updated>
<content:encoded><![CDATA[<h3><em>A Deep Dive Into The World’s First Stablecoin That Pays You To Learn While Protecting Against Inflation</em></h3><blockquote>The Knowledge Economy Revolution: How ASBC is Redefining Money for the Digital Age.</blockquote><p>In a world where artificial intelligence threatens traditional jobs and inflation erodes purchasing power, a revolutionary new approach to digital currency is emerging. The Anant Stable Base Coin (ASBC) isn’t just another cryptocurrency — it’s a paradigm shift that transforms how we think about money, work, and economic participation in the 21st century. <strong><em>I have included the complete whitepaper link at the end of this article.</em></strong></p><p><strong><em>The Anant Stable Base Coin (ASBC) protocol (Model D)</em></strong><em> introduces a revolutionary approach to digital currency stability through a </em><strong><em>fixed 1.2× USD genesis peg, multi-asset reserves, and human knowledge-based mining</em></strong><em>. Unlike traditional stablecoins that suffer from centralization risks, algorithmic failures, or opaque reserves, ASBC operates on its own Layer-1 blockchain with fully automated, code-enforced policies. The Model D design establishes a permanent $1.20 USD peg anchored to gold’s price at launch, maintains</em><strong><em> a diversified reserve of government T-bills, inflation-indexed bonds, gold bonds, and stablecoins (with gold failover mechanisms)</em></strong><em>, and creates sustainable </em><strong><em>Universal Basic Income </em></strong><em>via </em><strong><em>Proof of True Human Knowledge (PoTHK)</em></strong><em> consensus mining. 
All operations — from fee collection and reserve rebalancing to compliance checks — are governed by transparent smart contracts, eliminating human discretion and ensuring regulatory compliance through built-in zkKYC and automated dispute resolution.</em></p><figure><img alt="Anant Stable Base Coin — ASBC helps you Earn from Learning, a new Universal Basic Income Incentive Concept Through Cryptocurrency" src="https://cdn-images-1.medium.com/max/1024/0*fE-1mA3-P8VR_3Za.png" /><figcaption>Anant Stable Base Coin — ASBC helps you Earn from Learning, a new Universal Basic Income Incentive Concept Through Cryptocurrency</figcaption></figure><h3>Beyond Traditional Stablecoins:</h3><h4>A New Economic Model</h4><p>While existing stablecoins like USDT and USDC simply peg to the US dollar, ASBC takes a radically different approach. With its innovative 1.2x USD peg anchored to gold prices at genesis, ASBC provides a <strong>20% inflation buffer</strong> that protects holders’ purchasing power over time.</p><p>But the real innovation lies in how new ASBC enters circulation. Instead of relying on centralized issuers or energy-intensive mining, ASBC introduces <strong>Proof of True Human Knowledge (PoTHK)</strong> — a consensus mechanism that rewards human intelligence and learning.</p><blockquote>“We’re not just creating stable money — we’re creating stable income for anyone willing to contribute their knowledge to the world.” — ASBC Development Team</blockquote><h3>The UBI Revolution:</h3><h4>From Welfare to Workforce</h4><p>Traditional Universal Basic Income proposals face a fundamental challenge: funding. Governments can’t print money indefinitely without triggering inflation, and taxation-based UBI faces political resistance. 
ASBC solves this through <strong>knowledge-based value creation</strong>.</p><h4>How Knowledge Mining Works</h4><p>Every day, thousands of people worldwide can participate in ASBC’s knowledge mining system:</p><ul><li><strong>Complete educational questionnaires</strong> tailored to your expertise</li><li><strong>Earn 0.06–0.13 ASBC</strong> per correct answer based on your tier level</li><li><strong>Progress through eight tiers</strong> as you demonstrate consistent knowledge contribution</li><li><strong>Access stable income</strong> that grows with the network’s success</li></ul><p>This isn’t charity — it’s a new form of productive work where human intelligence becomes a valuable economic resource.</p><h3>Multi-Asset Stability:</h3><h4>Learning from Past Failures</h4><p>The spectacular collapse of Terra-USD in 2022, which wiped out $50 billion in value overnight, demonstrated the fatal flaws of algorithmic stablecoins. ASBC’s approach is fundamentally different:</p><h4>Diversified Reserve Structure</h4><ul><li><strong>30% Government T-Bills</strong> from top 10 GDP nations</li><li><strong>30% Inflation-Indexed Bonds</strong> that grow with inflation</li><li><strong>20% Gold Bonds</strong> providing crisis-resistant value</li><li><strong>20% Diversified Stablecoin Basket</strong> with automatic failover to gold</li></ul><p>This diversification means that even if one asset class fails, others compensate. Historical simulations show that ASBC maintains stability even during global financial crises.</p><h4>Compliance Without Compromise</h4><p>One of ASBC’s most innovative features is its <strong>compliance-by-design</strong> architecture. 
Through zero-knowledge KYC (zkKYC) and automated transaction monitoring, ASBC achieves regulatory compliance without sacrificing user privacy or decentralization.</p><h4>Smart Compliance Features</h4><ul><li><strong>zkKYC verification</strong> proves identity without exposing personal data</li><li><strong>Automated greylisting</strong> flags suspicious transactions for review</li><li><strong>Real-time reporting</strong> provides regulators with unprecedented transparency</li><li><strong>Multi-jurisdictional design</strong> accommodates global regulatory frameworks</li></ul><p>This approach positions ASBC as the first major stablecoin that regulators can embrace rather than fear.</p><h3>The Social Impact Multiplier</h3><p>ASBC’s design creates positive feedback loops that benefit society:</p><h4>Educational Incentives</h4><ul><li>Miners must learn to earn, creating global education incentives</li><li>Knowledge-based questions can cover everything from basic literacy to advanced technical skills</li><li>Communities can sponsor specialized question sets for local development needs</li></ul><h4>Economic Inclusion</h4><ul><li>Anyone with internet access can participate regardless of capital</li><li>Stable earning potential helps people in developing economies</li><li>No barriers based on nationality, credit history, or banking access</li></ul><h4>Network Effects</h4><ul><li>More users create more transaction fees</li><li>Higher fees fund larger UBI distributions</li><li>Growing UBI attracts more participants</li><li>Expanding network provides more stability</li></ul><h3>Real-World Impact:</h3><h4>Early Indicators</h4><p>While ASBC is still in development, early modeling suggests transformative potential:</p><ul><li><strong>Potential earnings</strong>: Dedicated miners could earn $400–800 monthly in early tiers</li><li><strong>Global reach</strong>: System designed to serve millions of participants worldwide</li><li><strong>Stability testing</strong>: Simulations show 
peg maintenance even during 20% market crashes</li><li><strong>Reserve growth</strong>: Fee-funded model creates self-reinforcing stability</li></ul><h3>The Technology Behind Trust</h3><p>ASBC operates on its own Layer-1 blockchain, purpose-built for stablecoin operations:</p><h4>Technical Innovations</h4><ul><li><strong>3-second block times</strong> for rapid transactions</li><li><strong>5,000+ TPS capacity</strong> with horizontal scaling planned</li><li><strong>EVM compatibility</strong> for easy developer adoption</li><li><strong>Automated reserve rebalancing</strong> maintains optimal asset allocation</li></ul><h4>Security First</h4><ul><li><strong>Multi-signature governance</strong> prevents single points of failure</li><li><strong>Time-locked upgrades</strong> provide transparency and review periods</li><li><strong>Circuit breakers</strong> halt operations during extreme market stress</li><li><strong>Formal verification</strong> of critical smart contracts</li></ul><h4>Addressing the Critics</h4><p>No innovative financial system is without skeptics. 
Common concerns include:</p><p><strong>“Sounds too good to be true”</strong>: Unlike Ponzi schemes, ASBC’s UBI is funded by actual economic activity (transaction fees), not new investor money.</p><p><strong>“Regulatory resistance”</strong>: ASBC’s proactive compliance design specifically addresses regulatory concerns while maintaining decentralization.</p><p><strong>“Market volatility”</strong>: The multi-asset reserve structure and 120% over-collateralization provide unprecedented stability buffers.</p><p><strong>“Scalability questions”</strong>: The knowledge-based mining system naturally scales with economic activity and can accommodate millions of participants.</p><h3>Looking Forward: The Path to Adoption</h3><p>ASBC’s roadmap includes:</p><h4>Near-term (6–12 months)</h4><ul><li>Main-net launch with core stability features</li><li>Initial mining community development</li><li>Regulatory sandbox participation</li><li>Exchange integrations</li></ul><h4>Medium-term (1–2 years)</h4><ul><li>Global regulatory approvals</li><li>Institutional adoption</li><li>Cross-border payment integrations</li><li>Developer ecosystem growth</li></ul><h4>Long-term (2+ years)</h4><ul><li>Central bank collaborations</li><li>International monetary system integration</li><li>Advanced privacy features</li><li>Quantum-resistant upgrades</li></ul><h3>The Bigger Picture: Reimagining Economic Systems</h3><p>ASBC represents more than technological innovation — it’s a philosophical statement about what money should do. Instead of concentrating wealth among capital owners, ASBC distributes value to knowledge contributors. 
Instead of creating artificial scarcity, it creates abundant opportunities for participation.</p><p>As we face an uncertain economic future marked by:</p><ul><li>Rising inflation and currency debasement</li><li>AI-driven job displacement</li><li>Growing global inequality</li><li>Climate concerns about energy-intensive mining</li></ul><p>ASBC offers a different path — one where technology serves humanity rather than extracting from it.</p><h3>Conclusion: The Knowledge Economy Awaits</h3><p>The transition from an industrial to an information economy requires new forms of money optimized for knowledge work. ASBC provides exactly that: a stable, inclusive, and socially beneficial currency that rewards human intelligence while maintaining the security and transparency that digital money requires.</p><p>For the first time in monetary history, we have the technology to create money that is simultaneously:</p><ul><li><strong>Stable</strong> through diversified reserves and algorithmic management</li><li><strong>Inclusive</strong> through knowledge-based participation</li><li><strong>Transparent</strong> through blockchain technology</li><li><strong>Compliant</strong> through automated regulatory features</li><li><strong>Sustainable</strong> through human rather than computational mining</li></ul><p>The knowledge economy revolution has begun. ASBC is leading the way.</p><p><em>Interested in learning more about ASBC or participating in the knowledge mining system? Ready to be part of the stablecoin revolution? Join the ASBC community and help build the future of money.</em></p><p><strong>Whitepaper Reference (This is a ChatGPT-generated PDF)</strong>: <a href="https://drive.google.com/file/d/1B7vrf8mGXGfL5pGaeeC1nOMMQ5z_RW0x/view?usp=drive_link">ASBC Model D whitepaper</a></p><p><strong><em>Disclaimer:</em></strong><em> This article is for educational purposes only. Cryptocurrency investments carry risk. 
ASBC is in development and regulatory approval is pending in various jurisdictions.</em></p><p>Thank you, Readers.</p><blockquote>I hope you have found this article to be informative and helpful. As a creator, I am dedicated to providing valuable insights and <strong>analysis of cryptocurrency, the stock market, blockchain, the AI/ML field, and other technologies</strong>.</blockquote><blockquote>If you have enjoyed this article and would like to support my ongoing efforts, I would be honored to have you as a member of my Patreon community <strong>(I regularly share algorithmic trading code for cryptocurrency and other asset classes)</strong>. As a member, you will have access to exclusive content, early access to new analysis, and the opportunity to be a part of shaping the direction of my research.</blockquote><blockquote><strong>Membership starts at just $4</strong>, and you can choose to contribute on a monthly basis. Your support will <strong>help me continue producing high-quality content</strong> and bring you the latest insights on emerging technologies.</blockquote><blockquote><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></blockquote><blockquote><em>Show some love to this project! 
Toss a coin to our wallet: 🪙🚀<br>USDT addresses of my crypto wallets —</em></blockquote><blockquote><em>0x6b7d9c7537ba0cc1494429b8dc21a23f75aa826d (BEP20 BSC, ERC20 Ethereum chain, Polygon POS (ends with contract address — 58e8f) Smart chain)</em></blockquote><blockquote><em>TLmtw1r1emN4M39LgUArp7xGKmj6Mmd5cs (Tron chain TRC20 for USDT)</em></blockquote><blockquote><em>8PEi2c1oiVNeQqRZLw3rUaN3oCtZ28o4nkCiuLgxoiiA (Solana Chain for USDT)</em></blockquote><blockquote><em>Indian viewers who would like to support can use GPay or UPI — pichupicasso@oksbi</em></blockquote><p>Regards,</p><p><strong>Puranam Pradeep Picasso Sharma</strong></p><p><strong><em>Linkedin — </em></strong><a href="https://www.linkedin.com/in/puranampradeeppicasso/"><strong><em>https://www.linkedin.com/in/puranampradeeppicasso/</em></strong></a></p><p><strong><em>Patreon — </em></strong><a href="https://patreon.com/pppicasso"><strong><em>https://patreon.com/pppicasso</em></strong></a></p><p><strong><em>Facebook — </em></strong><a href="https://www.facebook.com/puranam.p.picasso/"><strong><em>https://www.facebook.com/puranam.p.picasso/</em></strong></a></p><p><strong><em>Twitter — </em></strong><a href="https://twitter.com/picasso_999"><strong><em>https://twitter.com/picasso_999</em></strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f80d0d6e6a85" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why Every Stablecoin Before ASBC Was Broken (And How We Finally Fixed It)]]></title>
            <link>https://imbuedeskpicasso.medium.com/why-every-stablecoin-before-asbc-was-broken-and-how-we-finally-fixed-it-caf75b4280a0?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/caf75b4280a0</guid>
            <category><![CDATA[stable-coin]]></category>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[universal-basic-income]]></category>
            <category><![CDATA[consensus]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Wed, 17 Sep 2025 13:19:19 GMT</pubDate>
            <atom:updated>2025-09-17T13:21:00.773Z</atom:updated>
<content:encoded><![CDATA[<p><em>An in-depth analysis of stablecoin failures and the revolutionary solution that changes everything. </em><strong><em>(We have included the complete whitepaper link at the end of this article.)</em></strong></p><figure><img alt="ASBC (Anant Stable Base Coin) vs Other Stablecoins" src="https://cdn-images-1.medium.com/max/1024/0*c-lijlC1K8MjGrAD" /><figcaption>Different stablecoins in the market — reference: Google search</figcaption></figure><p>The stablecoin market has seen spectacular failures, from the $50 billion Terra Luna collapse to ongoing concerns about Tether’s reserves. These failures share common design flaws that ASBC’s innovative architecture finally addresses. Here’s why ASBC represents the evolution of digital currency beyond the limitations of its predecessors.</p><blockquote>The Anant Stable Base Coin (ASBC) protocol (Model D) introduces a revolutionary approach to digital currency stability through a fixed 1.2× USD genesis peg, multi-asset reserves, and human knowledge-based mining. Unlike traditional stablecoins that suffer from centralization risks, algorithmic failures, or opaque reserves, ASBC operates on its own Layer-1 blockchain with fully automated, code-enforced policies. The Model D design establishes a permanent $1.20 USD peg anchored to gold’s price at launch, maintains a diversified reserve of government T-bills, inflation-indexed bonds, gold bonds, and stablecoins (with gold failover mechanisms), and creates sustainable Universal Basic Income via Proof of True Human Knowledge (PoTHK) consensus mining. 
All operations — from fee collection and reserve rebalancing to compliance checks — are governed by transparent smart contracts, eliminating human discretion and ensuring regulatory compliance through built-in zkKYC and automated dispute resolution.</blockquote><figure><img alt="ASBC (Anant Stable Base Coin) vs Other Stablecoins" src="https://cdn-images-1.medium.com/max/1024/1*868ORsiECEcYSM8yO6rWRg.png" /><figcaption>ASBC (Anant Stable Base Coin) vs Other Stablecoins</figcaption></figure><h3>The Three Fatal Flaws of Traditional Stablecoins</h3><h4>Flaw #1: Centralization Risks (USDT, USDC)</h4><p><strong>The Problem</strong>: Centralized stablecoins require blind trust in corporate reserves and face single points of failure. Users can’t verify backing assets, and regulatory actions can freeze entire systems overnight.</p><p><strong>ASBC’s Solution</strong>: Multi-asset reserves with real-time on-chain verification. Every ASBC is backed by a diversified portfolio (T-bills, inflation-indexed bonds, gold bonds, stablecoins) with 120% over-collateralization that anyone can audit.</p><h4>Flaw #2: Death Spiral Mechanisms (Terra Luna, Iron Finance)</h4><p><strong>The Problem</strong>: Algorithmic stablecoins rely on market confidence. When that confidence breaks, mint-and-burn mechanisms create devastating feedback loops that destroy value.</p><p><strong>ASBC’s Solution</strong>: Real asset backing prevents death spirals. 
Even in extreme stress, ASBC maintains value through tangible reserves, not market psychology.</p><h4>Flaw #3: Capital Inefficiency (DAI, MakerDAO)</h4><p><strong>The Problem</strong>: Crypto-collateralized stablecoins require 150%+ over-collateralization of volatile assets, creating inefficient capital usage and liquidation risks.</p><p><strong>ASBC’s Solution</strong>: Stable, yield-generating reserves (government bonds, inflation-indexed securities) provide efficient backing without liquidation risk.</p><h3>The Innovation That Changes Everything</h3><p>ASBC introduces <strong>Proof of True Human Knowledge (PoTHK)</strong>, a consensus mechanism that creates the first sustainable Universal Basic Income built into a stablecoin’s design.</p><h3>How PoTHK Works</h3><ul><li>Global participants answer skill-based questionnaires</li><li>Correct answers earn ASBC rewards (0.06–0.13 per question)</li><li>Eight-tier progression system rewards long-term participation</li><li>Earnings funded by transaction fees, not inflation</li></ul><h4>Why This Matters</h4><p>Unlike traditional mining that wastes energy or algorithmic minting that creates inflation, PoTHK creates value through human knowledge contribution. 
This makes ASBC the first stablecoin that pays users while maintaining stability.</p><h3>Beyond 1:1 Pegs: The Inflation-Proof Design</h3><p>While every major stablecoin maintains a 1:1 USD peg, ASBC’s 1.2× USD peg (anchored to gold at genesis) provides a built-in inflation hedge.</p><h4>The 20% Inflation Buffer</h4><ul><li>If USD inflates 20%, traditional stablecoins lose 20% purchasing power</li><li>ASBC maintains value through its gold-anchored reference point</li><li>Over-collateralized reserves grow with inflation-indexed bonds</li><li>Gold allocation appreciates during currency debasement</li></ul><p>This design makes ASBC the first stablecoin that becomes more attractive during inflationary periods rather than less.</p><h3>Reserve Architecture:</h3><h4>Learning from Every Failure</h4><p>ASBC’s reserve design specifically addresses every historical stablecoin failure:</p><h4>Against Bank Runs (Iron Finance lesson)</h4><p><strong>Multi-asset backing</strong>: Even if 50% of reserves face problems, remaining assets maintain full backing<br><strong>Automated rebalancing</strong>: Daily algorithmic adjustments maintain target allocations<br><strong>Emergency conversion</strong>: Failed assets automatically convert to gold</p><h4>Against Regulatory Seizure (USDC frozen accounts)</h4><p><strong>Geographic diversification</strong>: Reserves spread across multiple jurisdictions<br><strong>Decentralized custody</strong>: No single entity controls all reserve assets<br><strong>Governance protection</strong>: Changes require community consensus, not corporate decisions</p><h4>Against Market Manipulation (Tether FUD cycles)</h4><p><strong>Real-time transparency</strong>: Continuous on-chain proof of reserves<br><strong>Independent audits</strong>: Multi-party verification of off-chain assets<br><strong>Stress testing</strong>: Public simulations of extreme scenarios</p><h3>The Compliance Revolution</h3><p>Traditional stablecoins face an impossible choice: remain 
decentralized and face regulatory crackdowns, or become compliant and sacrifice user control. ASBC solves this through <strong>compliance-by-design</strong>.</p><h4>Zero-Knowledge KYC (zkKYC)</h4><ul><li>Users prove identity without exposing personal data</li><li>Regulatory compliance without privacy compromise</li><li>Sybil attack protection maintains system integrity</li></ul><h4>Automated Monitoring</h4><ul><li>Smart contracts flag suspicious transactions</li><li>Real-time suspicious activity reporting (SAR) generation</li><li>Transparent processes without human discretion</li></ul><h4>Multi-Jurisdictional Design</h4><ul><li>Built-in compliance with major regulatory frameworks (US, EU, Asia-Pacific)</li><li>Adaptable architecture for evolving regulations</li><li>Proactive engagement with regulatory bodies</li></ul><h3>Economic Modeling:</h3><h4>Stress Testing the Future</h4><p>ASBC has undergone extensive simulation testing across multiple scenarios; a detailed explanation is given inside the whitepaper linked below.</p><h4>Bull Market Scenario</h4><ul><li>Perfect peg maintenance with &lt;0.2% deviation</li><li>Reserve ratio growth from 120% to 140%</li><li>Holder APY increasing from 3% to 5%</li><li>Sustainable UBI scaling with network growth</li></ul><h4>Bear Market Scenario</h4><ul><li>Temporary peg deviation to $1.18 (quickly restored)</li><li>Reserve ratio maintained above 118%</li><li>Automatic cost controls preserved system health</li><li>Loyal holders continued earning yields</li></ul><h4>Crisis Scenario (Multi-asset failure)</h4><ul><li>Worst-case peg deviation to $1.12 during peak panic</li><li>Reserve ratio bottomed at 102% (still over-collateralized)</li><li>Emergency measures prevented a death spiral</li><li>Full recovery within weeks of crisis resolution</li></ul><h3>The Social Impact Multiplier</h3><p>Unlike traditional stablecoins that extract value for corporate profits, ASBC creates positive social externalities:</p><h4>Educational 
Incentives</h4><p>Knowledge mining requires learning, creating global education incentives that scale with network size.</p><h4>Economic Inclusion</h4><p>Anyone with internet access can earn stable income regardless of capital, location, or banking access.</p><h4>Network Effects</h4><p>Growing participation strengthens the entire system through increased transaction fees, larger UBI distributions, and greater stability.</p><h3>Technical Architecture:</h3><h4>Built for Scale</h4><p>ASBC operates on purpose-built Layer-1 infrastructure optimized for stablecoin operations:</p><h4>Performance Specifications</h4><ul><li>3-second block times for rapid transactions</li><li>5,000+ TPS with horizontal scaling capability</li><li>EVM compatibility for developer adoption</li><li>Native cross-chain interoperability</li></ul><h4>Security Features</h4><ul><li>Multi-signature governance with timelock delays</li><li>Formal verification of critical smart contracts</li><li>Automated circuit breakers for extreme conditions</li><li>Hardware security module (HSM) key storage</li></ul><h4>Operational Excellence</h4><ul><li>Automated reserve rebalancing algorithms</li><li>Real-time risk monitoring and alerts</li><li>Emergency pause mechanisms with transparent recovery</li><li>Comprehensive audit trails for all operations</li></ul><h3>Roadmap to Global Adoption</h3><h4>Phase 1: Foundation (Current)</h4><ul><li>Core protocol development and auditing</li><li>Regulatory framework establishment</li><li>Community building and education</li><li>Strategic partnership development</li></ul><h4>Phase 2: Launch (Next 12 months)</h4><ul><li>Mainnet deployment with full features</li><li>Exchange integrations and liquidity provision</li><li>Knowledge mining community activation</li><li>Multi-jurisdictional regulatory approvals</li></ul><h4>Phase 3: Scale (12–24 months)</h4><ul><li>Institutional adoption and treasury use cases</li><li>Cross-border payment integration</li><li>Developer ecosystem 
expansion</li><li>Central bank digital currency compatibility</li></ul><h3>Why ASBC Succeeds Where Others Failed</h3><p>The key differentiators that make ASBC the first truly sustainable stablecoin:</p><h4>Real Value Creation</h4><p>PoTHK mining creates genuine value through knowledge contribution, funding UBI sustainably without inflation or speculation.</p><h4>Multi-Asset Resilience</h4><p>Diversified reserves across uncorrelated asset classes prevent single points of failure that destroyed previous projects.</p><h4>Inflation Protection</h4><p>The 1.2× USD peg provides built-in protection against currency debasement that threatens traditional stablecoins.</p><h4>Automated Compliance</h4><p>Code-enforced regulatory compliance enables global adoption without sacrificing decentralization.</p><h4>Community Alignment</h4><p>Token distribution favors participants and contributors rather than early investors, creating sustainable network effects.</p><h3>The Path Forward</h3><p>ASBC represents more than incremental improvement — it’s a fundamental reimagining of what digital money can be. By combining:</p><ul><li><strong>Rock-solid stability</strong> through diversified reserves</li><li><strong>Social benefit</strong> through Universal Basic Income</li><li><strong>Inflation protection</strong> through intelligent peg design</li><li><strong>Global accessibility</strong> through knowledge-based participation</li><li><strong>Regulatory compliance</strong> through automated governance</li></ul><p>ASBC creates the first stablecoin that serves humanity rather than extracting from it.</p><p>The age of broken stablecoins is ending. The era of truly stable, socially beneficial digital currency has begun.</p><p><em>Ready to be part of the stablecoin revolution? 
Join the ASBC community and help build the future of money.</em></p><p><strong>Whitepaper Reference (This is a ChatGPT-generated PDF)</strong>: <a href="https://drive.google.com/file/d/1B7vrf8mGXGfL5pGaeeC1nOMMQ5z_RW0x/view?usp=drive_link">ASBC Model D whitepaper</a></p><p>Thank you, Readers.</p><blockquote>I hope you have found this article to be informative and helpful. As a creator, I am dedicated to providing valuable insights and analysis of cryptocurrency, the stock market, blockchain, the AI/ML field, and other technologies.</blockquote><blockquote>If you have enjoyed this article and would like to support my ongoing efforts, I would be honored to have you as a member of my Patreon community (I regularly share algorithmic trading code for cryptocurrency and other asset classes). As a member, you will have access to exclusive content, early access to new analysis, and the opportunity to be a part of shaping the direction of my research.</blockquote><blockquote>Membership starts at just $4, and you can choose to contribute on a monthly basis. Your support will help me continue producing high-quality content and bring you the latest insights on emerging technologies.</blockquote><blockquote>Patreon — <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></blockquote><blockquote>Show some love to this project! 
Toss a coin to our wallet: 🪙🚀<br> USDT addresses of my crypto wallets —</blockquote><blockquote>0x6b7d9c7537ba0cc1494429b8dc21a23f75aa826d (BEP20 BSC, ERC20 Ethereum chain, Polygon POS (ends with contract address — 58e8f) Smart chain)</blockquote><blockquote>TLmtw1r1emN4M39LgUArp7xGKmj6Mmd5cs (Tron chain TRC20 for USDT)</blockquote><blockquote>8PEi2c1oiVNeQqRZLw3rUaN3oCtZ28o4nkCiuLgxoiiA (Solana Chain for USDT)</blockquote><blockquote>Indian viewers who would like to support can use GPay or UPI — pichupicasso@oksbi</blockquote><p>Regards,</p><h4><strong><em>Puranam Pradeep Picasso Sharma</em></strong></h4><p><strong><em>Linkedin — </em></strong><a href="https://www.linkedin.com/in/puranampradeeppicasso/"><strong><em>https://www.linkedin.com/in/puranampradeeppicasso/</em></strong></a></p><p><strong><em>Patreon — </em></strong><a href="https://patreon.com/pppicasso"><strong><em>https://patreon.com/pppicasso</em></strong></a></p><p><strong><em>Facebook — </em></strong><a href="https://www.facebook.com/puranam.p.picasso/"><strong><em>https://www.facebook.com/puranam.p.picasso/</em></strong></a></p><p><strong><em>Twitter — </em></strong><a href="https://twitter.com/picasso_999"><strong><em>https://twitter.com/picasso_999</em></strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=caf75b4280a0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tradingview Strategy with 80% win rate!]]></title>
            <link>https://imbuedeskpicasso.medium.com/tradingview-strategy-with-80-win-rate-c234991183bc?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/c234991183bc</guid>
            <category><![CDATA[crypto-trading]]></category>
            <category><![CDATA[crypto]]></category>
            <category><![CDATA[tradingview]]></category>
            <category><![CDATA[strategy]]></category>
            <category><![CDATA[stock-market]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Sun, 10 Nov 2024 08:18:31 GMT</pubDate>
            <atom:updated>2024-11-10T08:18:31.089Z</atom:updated>
            <content:encoded><![CDATA[<h3>Tradingview Strategy with 80% win rate! Works with cryptocurrencies, stocks, forex, and commodities (live results screenshots included)</h3><h3>Introduction</h3><p>In this article, I’ll walk you through the logic behind my multi-asset TradingView strategy, <strong><em>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3</em></strong>. This versatile strategy has demonstrated an 80% win rate and a profit factor above 3 for BTCUSDT. The same setup can be applied across multiple asset classes, including crypto, stocks, forex, and commodities, with varying timeframe settings. Here, I’ll outline the core indicators and settings I used and explain how this strategy adapts to different markets and timeframes without revealing the exact code.</p><blockquote><em>The </em><strong><em>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3</em></strong><em> Trading Strategy is a versatile tool designed for traders in cryptocurrency, forex, and stock markets. With custom stop-loss management, multiple indicators for trend confirmation, and adaptability across timeframes, this strategy offers a structured approach for multiple trading styles including scalping, day trading, and swing trading.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cb_4q7rsY_PKBZiuhtKlYQ.png" /><figcaption>tradingview strategy with 80% win rate BTCUSDT</figcaption></figure><h3>Strategy Overview</h3><p>The strategy is flexible enough to be effective on assets like crypto pairs (BTCUSDT, ETH), stocks (such as Nvidia), and forex pairs (like USDJPY). You can tailor the timeframe for each market to optimize results. Examples of tested timeframes include 1m, 3m, 15m, 1h, and 4h for various assets.</p><ul><li><strong>TradingView Link</strong>: <a href="https://www.tradingview.com/script/cQojEoXA-VA-PPP-Multi-Asset-Trading-Strategy-for-Crypto-Forex-and-Stock/">View the strategy results on TradingView</a></li></ul><h3>Key Indicators and Logic</h3><h4>1. 
Moving Averages (MA)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/476/1*6LZF5KQrUxc2J6qHNgkYmw.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li><strong>Types</strong>: The strategy supports three types of MAs — Exponential (EMA), Simple (SMA), and Weighted (WMA).</li><li><strong>Length</strong>: Set to 100 by default, the MA serves as a trend filter, helping to identify the overall direction.</li><li><strong>Purpose</strong>: The strategy only considers long positions above the MA and short positions below it, ensuring trades align with the primary trend.</li></ul><h4>2. UT Bot for Entry and Exit Signals</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/442/1*TwQjIVESs3ffFcgX5uxpqw.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li><strong>UT Bot Key Value</strong>: This multiplier, set to 4 by default, adjusts the sensitivity of entry and exit points.</li><li><strong>ATR Period</strong>: Adjusted through the settings, it smooths the signals by incorporating volatility.</li><li><strong>Heikin Ashi Candle Option</strong>: Switching to Heikin Ashi candles provides a smoother data source, reducing noise and offering clearer signals.</li><li><strong>Crossover Signals</strong>: Buy signals trigger when the source price moves above the trailing stop, and sell signals occur when it falls below.</li></ul><h4>3. 
Elder’s Weight Oscillator (EWO)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/438/1*-6V02ROUK-tnJvPpQA_kLA.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li><strong>Fast and Slow SMAs</strong>: The EWO uses a pair of SMAs (default settings: 16 and 26) to measure momentum.</li><li><strong>Positive/Negative Thresholds</strong>: When EWO is positive, the strategy focuses on overbought RSI ranges for shorts, and when negative, it focuses on oversold ranges for longs.</li><li><strong>Enhanced Momentum</strong>: The EWO enables the strategy to respond to shifts in market momentum, aligning with bullish or bearish biases.</li></ul><h4>4. Stochastic RSI (Stoch RSI)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/413/1*39NB8yNBeLZxx4MYIuas9g.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li><strong>Multi-Timeframe Flexibility</strong>: Using different timeframes, the Stoch RSI provides granular insights into potential reversal or breakout points.</li><li><strong>Conditions for Overbought/Oversold</strong>: This acts as a secondary confirmation. Combined with EWO, it’s used to identify potential turning points within trends, such as retracements or continuations.</li></ul><h4>5. Stop Loss and Take Profit Mechanisms</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/421/1*6Eo-mMhbkJ_jshEMeXQDpg.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li><strong>SLRatio</strong>: The default 1.25 SLratio balances the trade-off between risk and reward, aiming to secure profits while limiting losses.</li><li><strong>Percentage vs. 
Lookback-Based SL</strong>: The strategy allows a choice between percentage-based SL and lookback-based SL, with the latter relying on historical highs/lows.</li><li><strong>Dynamic TP and SL Adjustments</strong>: These features adapt based on leverage settings, enhancing the strategy’s risk management flexibility across different market environments.</li></ul><h4>6. TrendAlert Logic</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/521/1*_0Lex9Q9038_9vMUJ2v-ZA.png" /><figcaption>Tradingview strategy with high win rate alert Settings</figcaption></figure><ul><li><strong>Multi-Timeframe Trend Assessment</strong>: The strategy leverages both long-term and mid-term Heikin Ashi calculations, supported by a 20-period EMA, to gauge trend direction.</li><li><strong>Dynamic Trend Confirmation</strong>: Using cross-validation between daily and 4-hour trends, this component filters trades that align with the broader trend, increasing the likelihood of successful trades.</li></ul><h4><strong>7. TrendAlert Logic: Multi-Timeframe Trend Confirmation</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/483/1*w87g7fQ1p_QR8tqhqVib4g.png" /><figcaption>Tradingview strategy with high win rate inputs Settings</figcaption></figure><ul><li>The <strong>TrendAlert Logic</strong> is a significant component added to enhance accuracy. It uses a daily (long-term) and a 4-hour (mid-term) Heikin Ashi analysis to identify trend direction.</li><li>An additional 20-period EMA on the mid-term timeframe’s Heikin Ashi close price serves as a dynamic trend indicator.</li><li>The strategy will only enter long trades if both the mid-term and long-term trends are bullish, and only short trades if both are bearish, ensuring trades align with the broader market trend.</li></ul><h4><strong>8. 
Position Management and Serial Labeling for Entries</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FvbNK_wsdXyBvNOxvBqf4A.png" /><figcaption>buy/sell tradingview strategy with serial number for entry and exit with SL/TP stoploss and take profit</figcaption></figure><ul><li>The code includes labels to track each entry and exit with unique serial numbers, making it easier to analyze individual trades.</li><li>Alerts are integrated with information about the status of various toggles and filters, providing insights into why a particular trade was taken or avoided based on the strategy’s logic.</li></ul><h4>9. Time Zone Filter</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/477/1*TxWCHjAvmT3eDq6G51DWDw.png" /><figcaption>tradingview strategy with multiple time zone to filter various time zones as needed during trading or can switch it off as well</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/218/1*f8oYWqlNUA_l1iR2dHjOPw.png" /><figcaption>toggle buy/sell strategy or can activate long only strategy if doing spot trading instead of futures/options</figcaption></figure><ul><li><strong>Time-Based Trading Windows</strong>: Configurable time zones allow the strategy to avoid lower-liquidity periods or times of higher volatility, aligning trades with optimal market hours.</li><li><strong>Basic Settings:</strong></li><li>The strategy starts with settings such as initial capital, position sizing, and commission rate, which help control trading costs and maintain consistency across trades.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/369/1*xJgVnt13bshKh3GazoRCfg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/446/1*vxXGgAAjfuow1dc-W7lbrw.png" /><figcaption>tradingview strategy with Properties being set within customization</figcaption></figure><ul><li>The show_strategy_buy and show_strategy_sell options allow users to display or hide 
buy/sell signals on the chart.</li><li>The only_long_trades toggle, when activated, restricts trades to long positions only, ideal for bullish markets.</li><li>Always set pyramiding to 5, with the “Recalculate” options “After order is filled” and “On every tick” toggled ON (set to True).</li><li><strong>Time Zone Filter:</strong></li><li>This feature allows users to filter trades based on specific time zones, which is particularly useful for avoiding periods of low volatility or high unpredictability. When enabled, the strategy only trades within predefined time windows, enhancing the quality of signals.</li></ul><h3>Tested Timeframes and Examples</h3><p>I’ve found that different timeframes work best depending on the asset:</p><ul><li><strong>Crypto</strong>: Works well on 1m, 3m, 15m, and 1h timeframes.</li><li><strong>Stocks</strong>: For assets like Nvidia, 15m and 1h timeframes provide strong results.</li><li><strong>Forex (USDJPY)</strong>: Effective on 15m and 1h settings, where the strategy adapts well to forex trends.</li><li><strong>Commodities</strong>: Tested on various commodities with suitable adjustments for timeframe and ATR settings.</li></ul><h3>Screenshots and Visuals</h3><p>I’ll provide screenshots of how the strategy performs on different assets, highlighting setups and successful trades on assets like Nvidia stock and the USDJPY forex pair.</p><p>Here are screenshots of a few trades taken on Binance Futures using <strong><em>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3</em></strong> strategy alerts:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iIXgNG2kv1Yhob-4AeIKLQ.png" /><figcaption>binance futures results with small entry amounts part 1</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*akGAKHW6RHqLZsPRirfpmw.png" /><figcaption>binance futures results with small entry amounts part 2</figcaption></figure><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*X7f8mPa1k_n59auGKEqJIA.png" /><figcaption>binance futures results with small entry amounts part 3</figcaption></figure><blockquote><em>Use this strategy with confidence across different assets and timeframes. Below are examples of backtest results on various assets and timeframes.</em></blockquote><h4><strong>ETHUSDT, 3m timeframe with 69%+ win rate</strong></h4><p>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V2 (EMA, 100, 1, 1, 0.1, 4, 1, OFF, close, 16, 26, ON, CHART, 3, 3, 14, 14, 30, 50, 70, 90, On, 50, 70, 50, 70, On, 50, 70, 50, 70, On, 30, 50.01, 30, 50, On, 1.25, 1.25, On, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_Mm2GnR10Vth7fwS1rMNPA.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 3m timeframe ETHUSDT with high win rate</figcaption></figure><h4><strong>ETHUSDT, 15m timeframe with 80% win rate</strong></h4><p>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset (EMA, 100, 1, 1, 0.1, 4, 1, close, 16, 26, , 3, 3, 14, 14, 30, 50, 70, 90, 50, 70, 50, 70, 50, 70, 50, 70, 30, 50.01, 30, 50, 1.25, 1.25, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G0nNalab6ijoOSG7zfb2IA.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 15m timeframe ETHUSDT with high win rate</figcaption></figure><h4><strong>BTCUSDT, 1h timeframe with 68%+ win rate</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Gd7EgM8TUuDmWs2aQKNMtg.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 1h timeframe BTCUSDT with high win rate</figcaption></figure><h4><strong>USDJPY, 3m timeframe with 60%+ win rate</strong></h4><p>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 2, 1, false, close, 16, 26, true, , 1, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, true, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, D, 30, 20)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EsXCi6uUoaIME4_LU_YCQw.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 3m timeframe USDJPY with high win rate</figcaption></figure><h4>NVDA, 3m timeframe with 54.55% win rate</h4><p>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 1, 1, false, close, 16, 26, true, , 1, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, false, 2, 2, 300, 300, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, D, 30, 20)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S1xucdPh0bBmVzPhxghjkw.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 3m timeframe NVDA (Nvidia) with high win rate</figcaption></figure><h4>ETHUSDT.P with 1m timeframe giving 60%+ win rate</h4><p>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 400, 1, 1, 0.1, 2, 1, false, close, 16, 26, true, , 1, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, true, 2, 2, 600, 600, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, D, 30, 20)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iwxNSw98X8p-CIXacNlQ4A.png" /><figcaption>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 tradingview strategy on 1m timeframe ETHUSDT with high win rate</figcaption></figure><h4>Timeframes and Recommended Use:</h4><h4><strong>1. 3m Timeframe:</strong></h4><p>Ideal for fast-moving assets like cryptocurrencies or forex pairs; particularly suited for high-frequency traders and scalpers who want high reactivity to price moves.</p><h4><strong>2. 
15m Timeframe:</strong></h4><p>This timeframe works well for day trading, capturing short-term trends while filtering out excessive noise.</p><h4><strong>3. 1h Timeframe:</strong></h4><p>Offers balance between intraday and swing trading, ideal for stocks and longer-term forex positions.</p><blockquote>I have conducted backtests on Binance Futures from January 1st, 2023, to November 11th, 2024, to identify profitable assets and optimize settings. Below, I’ve listed recommended settings for specific trading pairs. For best results, add “.USDT.P” at the end of each asset name when setting up.</blockquote><blockquote>Please remember to conduct your own testing before using any recommended settings, as individual results may vary. I am not a financial advisor or certified expert; consult with qualified professionals before making any real trades. Trading and investing carry risks and require thorough knowledge and skill, so ensure you are well-informed before proceeding.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/482/1*oUM5aSo3PVGj2SMzUKu__A.png" /><figcaption>make sure to use same properties settings while testing tradingview strategy</figcaption></figure><h4>4. 
Custom Configuration of TradingView Strategy for 15m Timeframe for Cryptocurrencies:</h4><p><strong>Config:</strong> PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 4, 1, false, close, 16, 26, true, , 3, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, false, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, 5, 5, 20)</p><p><strong>Assets:</strong> BTCUSDT.P, AAVE, WIF, YGG, ICP, 1000BONK, ETC, IOST, HBAR, NEAR, WLD, BAT, NEO, MKR, CHZ, CFX, SOL, XTZ, ICX, UNI, ROSE, ETH, ENS, NKN, TRB, MTL, CTSI, OM, SPELL</p><p><strong>Config:</strong> PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 4, 1, false, close, 16, 26, true, , 3, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, true, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, 5, 5, 20)</p><p><strong>Assets:</strong> BIGTIME, WOO, STORJ, PERP, IOTX, ETH, CHZ, YGG, ORDI, AVAX</p><h4>5. Custom Configuration of TradingView Strategy for 1h Timeframe for Cryptocurrencies:</h4><p><strong>Config:</strong> PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 4, 1, false, close, 16, 26, true, , 3, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.25, 1.25, false, 1.5, 1.5, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, 5, 5, 20)</p><p><strong>Assets:</strong> BTC, ETH, ADA, APT, DOT, DOGE, ICP, VET, XLM, WIF, ORDI, IOST, QTUM, HBAR, NEAR, LRC, YGG, BICO, ZIL, CFX, ONE, LUNA2, GMT, GRT, SOL, HOT, ATOM, ATA, DUSK, UNI, DYDX, LQTY, CELR, STMX, ALPHA, BAKE, RLC, IOTX, FLM, FTM, SAND, ZRX, SKL, BNB, TRB, API3, NTRN, DASH, APE, REEF, SXP, ZEC</p><h4>6. 
Custom Configuration of TradingView Strategy for 3m Timeframe for Cryptocurrencies:</h4><p><strong>Config: </strong>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 100, 1, 1, 0.1, 3, 1, false, close, 3, 45, true, , 1, 3, 14, 14, 30, 50, 70, 90, true, 50, 70, 50, 70, true, 50, 70, 50, 70, true, 30, 50.01, 30, 50, true, 1.5, 1.5, false, 2, 2, 96, 96, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, 240, 30, 55)</p><p><strong>Assets:</strong> BTC, JASMY, BNX, LTC, MKR, DUSK, ALPHA, BAKE, STORJ, NTRN</p><p><strong>Config: </strong>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3 (true, true, false, true, EMA, 600, 1, 3, 0.1, 2, 1, false, close, 16, 25, true, , 3, 1, 18, 21, 30, 50, 70, 90, true, 70, 70, 70, 70, false, 90, 90, 70, 70, false, 20, 45, 20, 45, true, 2, 2, true, 2, 1.75, 300, 300, 0000-2345:12345, 0600-1630:12345, 0800-1300:12345, 1D, 120, 12)</p><p><strong>Assets: </strong>ETH, WIF, W, HBAR, MKR, CFX</p><blockquote><em>Use the TradingView </em><strong><em>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3</em></strong><em> strategy for a systematic approach to different market conditions, ensuring optimized entries and exits with built-in flexibility across asset classes and timeframes.</em></blockquote><p><em>Tip: Settings need to be manually adjusted for different assets and timeframes to achieve optimal results, with win rates between 45–80% achievable through fine-tuning.</em></p><h3>Conclusion</h3><p>This multi-asset strategy delivers robust performance across asset classes and timeframes by combining trend-following and momentum indicators. 
With configurable time zone filters, stop losses, and moving averages, it adapts to a wide range of markets.</p><p>If you’re interested in exploring this strategy further, you can view it on <a href="https://www.tradingview.com/script/cQojEoXA-VA-PPP-Multi-Asset-Trading-Strategy-for-Crypto-Forex-and-Stock/">TradingView</a> and purchase it on <a href="https://www.patreon.com/pppicasso/shop/tradingview-strategy-multi-asset-for-and-609895">Patreon</a>.</p><p>Feel free to connect with me on <a href="https://patreon.com/pppicasso">Patreon</a>, <a href="https://www.linkedin.com/in/puranampradeeppicasso/">LinkedIn</a>, or Twitter <a href="https://x.com/picasso_999">@picasso_999</a> for insights and updates.</p><p><strong>Purchase Link</strong>: <a href="https://www.patreon.com/pppicasso/shop/tradingview-strategy-multi-asset-for-and-609895">Get this strategy on Patreon</a></p><blockquote><em>get entire code and profitable algos @ </em><a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link"><em>https://patreon.com/pppicasso</em></a></blockquote><p><strong><em>Disclaimer:</em></strong><em> Trading involves risk. Past performance is not indicative of future results. </em><strong><em>PPP_VishvaAlgo_3m_15m_1h_Crypto_MultiAsset_V3</em></strong><em> is a tool to assist traders and does not guarantee profits. 
Please trade responsibly and conduct thorough research before making investment decisions.</em></p><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c234991183bc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[66941.5% Returns in Testing and 900+ Live Trades in Action: A Journey Through Time Series Ensemble…]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://imbuedeskpicasso.medium.com/66941-5-returns-in-testing-and-900-live-trades-in-action-a-journey-through-time-series-ensemble-7ad4b833ae9f?source=rss-f3467d786018------2"><img src="https://cdn-images-1.medium.com/max/1920/0*kRAGUL6BkUiSdfgU.jpg" width="1920"></a></p><p class="medium-feed-snippet">Unleashing the power of Neural Networks for creating Trading Bot for maximum profits.</p><p class="medium-feed-link"><a href="https://imbuedeskpicasso.medium.com/66941-5-returns-in-testing-and-900-live-trades-in-action-a-journey-through-time-series-ensemble-7ad4b833ae9f?source=rss-f3467d786018------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://imbuedeskpicasso.medium.com/66941-5-returns-in-testing-and-900-live-trades-in-action-a-journey-through-time-series-ensemble-7ad4b833ae9f?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/7ad4b833ae9f</guid>
            <category><![CDATA[crypto-trading]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Tue, 03 Sep 2024 23:11:59 GMT</pubDate>
            <atom:updated>2024-09-03T23:11:59.393Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Unlocking 152,293% Returns on ETH: Did Neural Networks Overcome Overfitting with a TCN Model?]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://imbuedeskpicasso.medium.com/unlocking-152-293-returns-on-eth-did-neural-networks-overcome-overfitting-with-a-tcn-model-9b8322e577bb?source=rss-f3467d786018------2"><img src="https://cdn-images-1.medium.com/max/640/0*WObsRQqEgqWjs9Fc.jpg" width="640"></a></p><p class="medium-feed-snippet">Unleashing the power of Neural Networks for creating Trading Bot for maximum profits.</p><p class="medium-feed-link"><a href="https://imbuedeskpicasso.medium.com/unlocking-152-293-returns-on-eth-did-neural-networks-overcome-overfitting-with-a-tcn-model-9b8322e577bb?source=rss-f3467d786018------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://imbuedeskpicasso.medium.com/unlocking-152-293-returns-on-eth-did-neural-networks-overcome-overfitting-with-a-tcn-model-9b8322e577bb?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/9b8322e577bb</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <category><![CDATA[crypto-trading]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Tue, 03 Sep 2024 08:35:55 GMT</pubDate>
            <atom:updated>2024-09-03T08:35:55.672Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[720+% Returns in 3 years on Cryptocurrency using LSTM Neural Network Model and short listing Best…]]></title>
            <link>https://imbuedeskpicasso.medium.com/720-returns-in-3-years-on-cryptocurrency-using-lstm-neural-network-model-and-short-listing-best-6229f941b823?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/6229f941b823</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Sun, 23 Jun 2024 14:44:00 GMT</pubDate>
            <atom:updated>2024-06-23T14:44:00.993Z</atom:updated>
            <content:encoded><![CDATA[<h3>720+% Returns in 3 years on Cryptocurrency using LSTM Neural Network Model and shortlisting Best Assets for Trading — VishvaAlgo Machine Learning Trading Bot</h3><p>Unleashing the power of neural networks to create a trading bot for maximum profits.</p><h3>Introduction:</h3><p>Welcome to the world of algorithmic trading and machine learning, where innovation meets profitability. Over the past three years, I’ve dedicated myself to developing algorithmic trading systems that harness the power of various strategies. Through relentless experimentation and refinement, I’ve achieved impressive returns across multiple strategies, delighting members of <a href="https://www.patreon.com/pppicasso"><strong><em>my Patreon community with consistent profits</em></strong></a>.</p><p>In the pursuit of excellence, I recently launched <a href="https://www.patreon.com/pppicasso/shop"><strong><em>VishvaAlgo, a machine learning-based algorithmic trading system that leverages neural network classification models</em></strong></a><strong><em>.</em></strong> This cutting-edge platform has already demonstrated remarkable results, delivering exceptional returns to traders in the cryptocurrency market. Through a series of articles and practical demonstrations, I’ve shared insights on transitioning from traditional algorithmic trading to deploying practical machine learning models, showcasing their effectiveness in real-world trading environments.</p><p>In this article, we delve into the transformative potential of algorithmic trading and machine learning, focusing on the effectiveness of neural networks, specifically the LSTM technique. 
Building upon our past successes, we set out to demonstrate the remarkable profitability achievable with advanced machine learning models, using Bitcoin (BTC) and Ethereum (ETH) as our primary assets.</p><p>Our analysis focuses on Ethereum pricing in USDT, utilizing 15-minute candlestick data spanning from January 1st, 2021, to October 22nd, 2023, comprising over 97,000 rows of data and more than 190 features. By leveraging neural network models for prediction, we aim to identify optimal long and short positions, showcasing the potential of deep learning in financial markets.</p><blockquote>Our story is one of relentless innovation, fueled by a burning desire to unlock the full potential of Deep Learning in the pursuit of profit. In this article, we invite you to join us as we unravel the exciting tale of our transformation from humble beginnings to groundbreaking success.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/685/0*k43iwLlU8G4Opyxf.png" /><figcaption>LSTM classification time series model for crypto</figcaption></figure><h3>Our Algorithmic Trading Vs/+ Machine Learning Vs/+ Deep Learning Journey so far?</h3><h4>Stage 1:</h4><p>We have developed a crypto Algorithmic Strategy which gave us huge profits when ran on multiple crypto assets (138+) with a profit range of 8787%+ in span of 3 years (almost).</p><h4>“The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! 
Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — <a href="https://imbuedeskpicasso.medium.com/the-8787-roi-algo-strategy-unveiled-for-crypto-futures-22a5dd88c4a5">Link</a></h4><p>We ran it live in dry-run mode for 7 days and shared the details in another article.</p><h4>“Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — <a href="https://imbuedeskpicasso.medium.com/freqtrade-revealed-7-day-journey-in-algorithmic-trading-for-crypto-futures-market-1032c409d6bd">Link</a></h4><p>After <strong>successful backtest results and forward testing</strong> (live trading in dry-run mode), we set out to improve the strategy further: lower stop-losses, better odds of winning, reduced risk, and other important refinements.</p><h4>Stage 2:</h4><p>We then developed a standalone strategy without the Freqtrade setup (forgoing the trailing stop loss, parallel multi-asset execution, and the advanced risk management that Freqtrade, a free open-source platform, provides), tested it in the market, optimized it with hyperparameters, and obtained positive profits from the strategy.</p><h4>“How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — <a href="https://medium.com/p/b1de0d20cd39">Link</a></h4><h4>Stage 3:</h4><p>Since we had tested our strategy on only one asset (BTC/USDT in the crypto market), we wanted to segregate our full set of assets (the same ones used for the earlier Freqtrade strategy) into clusters based on their volatility: trading only suitably volatile assets makes profitable trades easier and avoids hitting large stop-losses on the rest.</p><p>We used <strong>K-Nearest Neighbors (KNN)</strong> to identify different 
clusters of assets among the 138 crypto assets we use in our freqtrade strategy, which gave us <strong>8000+% profits</strong> during backtesting.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — <a href="https://medium.com/p/0c143a6ab7cb">Link</a></h4><h4>Stage 4:</h4><p>Next, we introduced an unsupervised machine learning model, the Hidden Markov Model (HMM), to identify market regimes so that we trade only during profitable trends and sit out sudden pumps, dumps, and negative trends. The article below explains this in detail.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -2 (Hidden Markov Model — HMM)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-2-hidden-markov-model-98e4894e3d9e">Link</a></h4><h4>Stage 5:</h4><p>I worked on using the XGBoost Classifier to identify long and short trades from our existing signal. Before using it, we ensured that the signal algorithm we had previously developed was hyper-optimized. Additionally, we introduced different stop-loss and take-profit parameters for this setup, causing the target values to change accordingly. We also adjusted the parameters used for obtaining profitable trades based on the stop-loss and take-profit values. Later, we tested the basic XGBClassifier setup and then improved the results by adding re-sampling methods. Our target classes, 0 (neutral), 1 (long trades), and 2 (short trades), were imbalanced due to trade execution timing. To address this imbalance, we employed re-sampling methods and performed hyper-optimization of the classifier model. Subsequently, we evaluated whether other classifier models such as SVC, CatBoost, and LightGBM performed better, in combination with LSTM and XGBoost. 
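To make the imbalance handling concrete, here is a minimal, self-contained sketch (the toy label vector is invented for illustration, not our dataset) showing how scikit-learn's balanced class weights up-weight the rare long and short classes:

```python
import numpy as np
from sklearn.utils import class_weight

# Invented toy labels mimicking our imbalance: mostly neutral (0),
# with only a few long (1) and short (2) signals.
y = np.array([0] * 90 + [1] * 6 + [2] * 4)

# "balanced" assigns each class the weight n_samples / (n_classes * class_count),
# so the rarer a class, the larger its weight.
weights = class_weight.compute_class_weight(
    class_weight="balanced", classes=np.array([0, 1, 2]), y=y
)
print(dict(zip([0, 1, 2], weights)))
```

These weights can be passed to most classifiers (via their `class_weight` or `sample_weight` parameters) as an alternative or complement to re-sampling methods such as ADASYN.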
Finally, we concluded by analyzing the results and computing feature importances to identify the most productive features.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -3 (XGBoost Classifier , LGBM Classifier, CatBoost Classifier, SVC, LSTM with XGB and Multi level Hyper-optimization)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-3-xgboost-classifier-6c4f49c58800">Link</a></h4><h4>Stage 6:</h4><p>In this stage, I utilized the CatBoostClassifier along with resampling and sample weights. I incorporated multiple-time-frame indicators covering volume, momentum, trend, and volatility into my model. After running the model, I applied ensembling techniques to enhance its overall performance. Backtested profit rose from 54% to over 4600%. Recall, precision, accuracy, and F1 score all exceeded 80% for each of the three trading classes (0 for neutral, 1 for long, and 2 for short trades).</p><h4>“From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best” — <a href="https://imbuedeskpicasso.medium.com/from-54-to-a-staggering-4648-catapulting-cryptocurrency-trading-with-catboost-classifier-75ac9f10c8fc">Link</a></h4><h4>Stage 7:</h4><p>In this stage, the <strong><em>ensemble method combining TCN and LSTM neural network models</em></strong> demonstrated exceptional performance across various datasets, outperforming the individual models and even surpassing buy-and-hold strategies. 
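The ensembling idea can be sketched as simple soft voting, i.e. averaging the class probabilities each model predicts and taking the argmax; the probability arrays below are invented for illustration and stand in for already-trained models such as a TCN and an LSTM:

```python
import numpy as np

# Hypothetical per-class probabilities from two trained models for
# 4 candles and 3 classes: 0 = neutral, 1 = long, 2 = short.
p_tcn = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6],
                  [0.4, 0.4, 0.2]])
p_lstm = np.array([[0.6, 0.3, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.5, 0.3, 0.2]])

# Soft voting: average the probabilities, then pick the most likely class.
p_ens = (p_tcn + p_lstm) / 2
signals = p_ens.argmax(axis=1)
print(signals)
```

In practice the average can be weighted toward the stronger model, or the combination can be delegated to scikit-learn's VotingClassifier with voting='soft'.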
This underscores the effectiveness of ensemble learning in improving prediction accuracy and robustness.</p><h4>“Bitcoin/BTC 4750%+ , Etherium/ETH 11,270%+ profit in 1023 days using Neural Networks, Algorithmic Trading Vs/+ Machine Learning Models Vs/+ Deep Learning Model Part — 4 (TCN, LSTM, Transformer with Ensemble Method)” — <a href="https://medium.com/p/d5a644cdc36f/">Link</a></h4><h4>Stage 8:</h4><p>Experience the future of trading with VishvaAlgo v3.8. With its advanced features, unparalleled risk management capabilities, and ease of integration of ML and neural network models, VishvaAlgo is the ultimate choice for traders seeking consistent profits and peace of mind. Don’t miss out on this opportunity to revolutionize your trading journey.</p><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.8 Live Crypto Trading Using Machine Learning Model</a></blockquote><h4>“VishvaAlgo v3.0 — Revolutionize Your Live Cryptocurrency Trading system Enhanced with Machine Learning (Neural Network) Model. 
Live Profits Screenshots Shared” — <a href="https://medium.com/p/f4ca0facae7e/">Link</a></h4><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><h3>The code Explanation:</h3><pre># Remove Future Warnings<br>import warnings<br>warnings.simplefilter(action=&#39;ignore&#39;, category=FutureWarning)<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># General<br>import numpy as np<br># Data Management<br>import pandas as pd<br># Machine Learning<br>from catboost import CatBoostClassifier<br>from sklearn.model_selection import train_test_split<br>from sklearn.model_selection import RandomizedSearchCV, cross_val_score<br>from sklearn.model_selection import RepeatedStratifiedKFold<br>from sklearn.linear_model import LogisticRegression<br># ensemble<br>from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier<br>from sklearn.ensemble import StackingClassifier<br>from sklearn.ensemble import VotingClassifier<br>#Sampling Methods<br>from imblearn.over_sampling import ADASYN<br>#Scaling<br>from sklearn.preprocessing import MinMaxScaler<br># Binary Classification Specific Metrics<br>from 
sklearn.metrics import RocCurveDisplay as plot_roc_curve<br># General Metrics<br>from sklearn.metrics import accuracy_score, precision_score<br>from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score<br>from sklearn.metrics import ConfusionMatrixDisplay<br><br># Reporting<br>import matplotlib.pyplot as plt<br>from matplotlib.pylab import rcParams<br>from xgboost import plot_tree<br>#Backtesting<br>from backtesting import Backtest<br>from backtesting import Strategy<br>#hyperopt<br>from hyperopt import fmin, tpe, hp, STATUS_OK, Trials<br>from pandas_datareader.data import DataReader<br>import json<br>from datetime import datetime<br>import talib as ta<br>import ccxt<br>from sklearn.utils import class_weight<br>from keras.models import Sequential<br>from keras.layers import LSTM, Dense, Dropout<br>from keras.optimizers import Adam</pre><p><strong>Import Statements:</strong></p><ul><li><strong>Warnings:</strong></li><li>These lines suppress warnings that might appear during execution. While this keeps the output clean during long training runs, it’s generally recommended to address the warnings themselves for better debugging and understanding potential issues.</li><li><strong>General Libraries:</strong></li><li>numpy (np): Provides numerical computing capabilities, used here for array operations and mathematical functions.</li><li>pandas (pd): Used for data manipulation, analysis, and visualization. 
Essential for working with the structured OHLCV time-series data used throughout this project.</li><li><strong>Machine Learning Libraries:</strong></li><li>catboost: Provides CatBoostClassifier, the gradient-boosting model featured in earlier stages of this series.</li><li>scikit-learn (various submodules): A comprehensive machine learning library supplying the model-selection utilities used here:</li><li>train_test_split: Splits data into training and testing sets for model evaluation.</li><li>RandomizedSearchCV, cross_val_score, RepeatedStratifiedKFold: Techniques for hyperparameter tuning and model evaluation (cross-validation).</li><li>LogisticRegression: A linear classification model, useful as a baseline or as a meta-learner in stacking.</li><li><strong>Ensemble Methods:</strong></li><li>scikit-learn (RandomForestClassifier, GradientBoostingClassifier, StackingClassifier, VotingClassifier): Techniques for combining multiple models to improve performance.</li><li><strong>Sampling Methods:</strong></li><li>imblearn (ADASYN): Provides tools for handling imbalanced datasets (where classes have unequal sizes), used here to balance the 0/1/2 target classes.</li><li><strong>Scaling:</strong></li><li>scikit-learn (MinMaxScaler): Normalizes features to a fixed range, often necessary before training neural networks.</li></ul><p><strong>Metrics:</strong></p><ul><li><strong>Binary Classification Metrics:</strong></li><li>scikit-learn: Used to evaluate the performance of classification models, particularly for binary classification (two classes). 
RocCurveDisplay is imported here to plot ROC curves when evaluating the trained classifiers.</li><li><strong>General Metrics:</strong></li><li>scikit-learn: Various metrics for evaluating model performance across classification tasks:</li><li>accuracy_score: Proportion of correct predictions.</li><li>precision_score: Proportion of true positives among predicted positives.</li><li>confusion_matrix: Tabulates how many instances were classified correctly or incorrectly for each class.</li><li>classification_report: Detailed report on model performance, including precision, recall, F1-score, and support for each class.</li><li>roc_curve, roc_auc_score: Measures based on the Receiver Operating Characteristic (ROC) curve, which helps evaluate a model&#39;s ability to discriminate between classes.</li></ul><p><strong>Reporting:</strong></p><ul><li>matplotlib.pyplot (plt): Used for creating visualizations like charts and graphs, essential for presenting the data and model results below.</li></ul><p><strong>Backtesting:</strong></p><ul><li>backtesting: Library for backtesting trading strategies; it drives the Backtest/Strategy evaluation later in this article.</li></ul><p><strong>Hyperparameter Optimization:</strong></p><ul><li>hyperopt: Library for hyperparameter tuning (finding the best settings for machine learning models).</li></ul><p><strong>Data Retrieval:</strong></p><ul><li>pandas_datareader: Facilitates data retrieval from various financial data sources (imported but not central here, since the candles come from a local file).</li></ul><p><strong>Other Imports:</strong></p><ul><li>json: For loading the exchange candle data stored in JSON format.</li><li>datetime: For working with date and time objects. 
Used when parsing and manipulating the time-series index.</li><li>talib: Technical analysis library for financial markets, used extensively below to compute the indicator features.</li><li>ccxt: Library for interacting with cryptocurrency exchanges (imported for completeness; the data here is loaded from a local JSON file instead).</li></ul><p><strong>Context:</strong></p><ul><li>Each library and module is imported with a specific purpose, such as data manipulation, machine learning, evaluation, visualization, backtesting, hyperparameter optimization, etc.</li><li>These libraries and modules will be used throughout the code for various tasks like data preprocessing, model training, evaluation, optimization, and visualization.</li></ul><pre># Define the path to your JSON file<br>file_path = &#39;./ETH_USDT_USDT-15m-futures.json&#39;<br><br># Open the file and read the data<br>with open(file_path, &quot;r&quot;) as f:<br>    data = json.load(f)<br>df = pd.DataFrame(data)<br># Extract the OHLC data (adjust column names as needed)<br># ohlc_data = df[[&quot;date&quot;,&quot;open&quot;, &quot;high&quot;, &quot;low&quot;, &quot;close&quot;, &quot;volume&quot;]]<br>df.rename(columns={0: &quot;Date&quot;, 1: &quot;Open&quot;, 2: &quot;High&quot;,3: &quot;Low&quot;, 4: &quot;Adj Close&quot;, 5: &quot;Volume&quot;}, inplace=True)<br># Convert timestamps to datetime objects<br>df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;] / 1000, unit=&#39;s&#39;)<br>df.set_index(&quot;Date&quot;, inplace=True)<br># Format the date index<br>df.index = df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;)<br>df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br># print(df.dropna(), df.describe(), df.info())<br>data = df<br>data</pre><p>To analyze historical cryptocurrency futures data, we can first load the data from a JSON file. The provided code demonstrates how to use Python’s json library to parse the JSON content into a Python list of candles. We then convert this list into a pandas DataFrame for easier manipulation. 
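One small aside on the timestamp handling in the snippet above: dividing millisecond timestamps by 1000 and parsing them with unit='s' is equivalent to passing unit='ms' directly. A tiny check with made-up timestamps:

```python
import pandas as pd

# Two made-up exchange timestamps in milliseconds since the Unix epoch.
ms = pd.Series([1609459200000, 1609460100000])

# What the loader does: divide by 1000, then interpret as seconds.
via_seconds = pd.to_datetime(ms / 1000, unit="s")
# Equivalent and simpler: interpret the raw values as milliseconds.
via_millis = pd.to_datetime(ms, unit="ms")

print(via_seconds.equals(via_millis))  # True
```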
The DataFrame is cleaned and transformed by renaming columns, converting timestamps to datetime objects, setting the date as the index, and formatting the date display for better readability.</p><p><strong>Here’s the step-by-step explanation of the code:</strong></p><p><strong>1. Loading JSON Data:</strong></p><ul><li>The code defines a file path (file_path) to a JSON file containing cryptocurrency data (Open-High-Low-Close-Volume candles for Ethereum futures contracts traded against USDT).</li><li>It opens the file for reading (with open(file_path, &quot;r&quot;) as f:) and uses json.load(f) to parse the JSON content into a Python list (data), one entry per candle.</li></ul><p><strong>2. Converting to DataFrame:</strong></p><ul><li>The code creates a pandas DataFrame (df) from the loaded list (data). A DataFrame is a tabular data structure similar to a spreadsheet, making it easier to work with and analyze the data.</li></ul><p><strong>3. Data Cleaning and Transformation:</strong></p><ul><li>This part assumes the JSON data has columns with numerical indices (0, 1, 2, etc.) instead of meaningful names. It renames these columns to more descriptive labels (&quot;Date&quot;, &quot;Open&quot;, &quot;High&quot;, &quot;Low&quot;, &quot;Adj Close&quot;, &quot;Volume&quot;) using df.rename(columns={...}, inplace=True).</li><li>It converts the &quot;Date&quot; column from timestamps (in milliseconds since the Unix epoch, as exchanges typically provide) to datetime objects using pd.to_datetime(). This makes it easier to work with dates and perform time-based operations.</li><li>The code sets the &quot;Date&quot; column as the index of the DataFrame using df.set_index(&quot;Date&quot;, inplace=True). 
This allows you to efficiently access and filter data based on dates.</li><li>It formats the date index using df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;) to display dates in a more readable format (e.g., &quot;05-14-2024 16:35&quot;); note that this converts the index to strings, which the backtesting section later converts back to datetime objects.</li><li>Finally, it copies the &quot;Adj Close&quot; column (the adjusted closing price) into a new &quot;Close&quot; column, which the indicator and backtesting code below expects.</li></ul><pre># Assuming you have a DataFrame named &#39;df&#39; with columns &#39;Open&#39;, &#39;High&#39;, &#39;Low&#39;, &#39;Close&#39;, &#39;Adj Close&#39;, and &#39;Volume&#39;<br>target_prediction_number = 2<br>time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>name_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br><br>df = data.copy()<br>new_columns = []<br>for period in time_periods:<br>    for nperiod in name_periods:<br>        df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period*2)<br>        df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=period*0.5)<br>        df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>        df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period*2)<br>        df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period*2)<br>        df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>        macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), 
signalperiod=(period + 9))<br>        df[f&#39;MACD_{period}&#39;] = macd<br>        df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>        bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=period*0.5, nbdevup=2, nbdevdn=2)<br>        df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>        df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>        df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>        df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], timeperiod=(period+35))<br>        <br>    <br>df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br># Volume-Based Indicators<br>df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br># Momentum-Based Indicators<br>df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br># Calculate the Elliott Wave Oscillator (EWO)<br>#df[&#39;EWO&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=5) - ta.SMA(df[&#39;Close&#39;], timeperiod=35)<br># Volatility-Based Indicators<br># df[&#39;ATR&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=14)<br># df[&#39;BB_upper&#39;], df[&#39;BB_middle&#39;], df[&#39;BB_lower&#39;] = ta.BBANDS(df[&#39;Close&#39;], timeperiod=20, nbdevup=2, nbdevdn=2)<br># df[&#39;KC_upper&#39;], df[&#39;KC_middle&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=20), ta.EMA(df[&#39;Low&#39;], timeperiod=20)<br># df[&#39;Donchian_upper&#39;], df[&#39;Donchian_lower&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=20), ta.MIN(df[&#39;Low&#39;], timeperiod=20)<br># Trend-Based 
Indicators<br># df[&#39;MA&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=20)<br># df[&#39;EMA&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=20)<br>df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br># Set pandas option to display all columns<br>pd.set_option(&#39;display.max_columns&#39;, None)<br># Displaying the calculated indicators<br>print(df.tail())<br>df.dropna(inplace=True)<br>print(&quot;Length: &quot;, len(df))<br>df</pre><p>This code demonstrates the calculation of various technical indicators using the talib library. The code iterates through different time periods to compute indicators like Average True Range (ATR), Exponential Moving Average (EMA), Relative Strength Index (RSI), and several others. Additionally, it calculates features like returns, range, and volatility to potentially use as input features for machine learning models.</p><p><strong>1. Technical Indicator Calculations:</strong></p><ul><li>The code iterates through two lists, time_periods and name_periods (which seem to have the same values here). 
As written, the nested loop is redundant: the inner loop variable (nperiod) is never used, so the same columns are recomputed on every inner pass, and a single loop over time_periods would produce identical features.</li><li>Within the loops, it calculates numerous technical indicators for each specified time period (period) using talib functions:</li><li><strong>Average True Range (ATR):</strong> Measures market volatility (df[f&#39;ATR_{period}&#39;]).</li><li><strong>Exponential Moving Average (EMA):</strong> Calculates EMAs with a period twice the loop’s period (df[f&#39;EMA_{period}&#39;]).</li><li><strong>Relative Strength Index (RSI):</strong> Calculates RSI with a period half the loop’s period (df[f&#39;RSI_{period}&#39;]).</li><li><strong>Volume-Weighted Average Price (VWAP):</strong> Calculates VWAP for the period (df[f&#39;VWAP_{period}&#39;]).</li><li><strong>Rate of Change (ROC):</strong> Calculates ROC for the period (df[f&#39;ROC_{period}&#39;]).</li><li><strong>Keltner Channels (KC):</strong> Calculates upper and middle bands as EMAs of highs and lows (df[f&#39;KC_upper_{period}&#39;], df[f&#39;KC_middle_{period}&#39;]); note this is a simplified variant, since true Keltner Channels place ATR-based bands around a central EMA.</li><li><strong>Donchian Channels:</strong> Calculates upper and lower bands based on maximum and minimum highs/lows within the period (df[f&#39;Donchian_upper_{period}&#39;], df[f&#39;Donchian_lower_{period}&#39;]).</li><li><strong>Moving Average Convergence Divergence (MACD):</strong> Calculates MACD and its signal line for the period (df[f&#39;MACD_{period}&#39;], df[f&#39;MACD_signal_{period}&#39;]).</li><li><strong>Bollinger Bands (BB):</strong> Calculates upper, middle, and lower bands for the period (df[f&#39;BB_upper_{period}&#39;], df[f&#39;BB_middle_{period}&#39;], df[f&#39;BB_lower_{period}&#39;]).</li><li><strong>Elliott Wave Oscillator (EWO):</strong> Calculates EWO for the period (df[f&#39;EWO_{period}&#39;]).</li><li><strong>Target Prediction and Feature Engineering:</strong></li><li>The code defines a target_prediction_number (the number of periods ahead to predict).</li><li>It calculates “Returns” as the 
percentage change in adjusted close prices over the target_prediction_number periods (df[&quot;Returns&quot;]).</li><li>It calculates “Range” as the difference between high and low prices divided by the low price (df[&quot;Range&quot;]).</li><li>It calculates “Volatility” as the rolling standard deviation of returns over the target_prediction_number periods (df[&quot;Volatility&quot;]).</li><li><strong>Additional Indicators:</strong></li><li>The code calculates On-Balance Volume (OBV) and Accumulation Distribution Line (ADL) using talib functions (df[&#39;OBV&#39;], df[&#39;ADL&#39;]).</li><li>It calculates the Stochastic Oscillator using talib (df[&#39;Stoch_Oscillator&#39;]).</li><li>It calculates the Parabolic Stop and Reversal (PSAR) using talib (df[&#39;PSAR&#39;]).</li></ul><h3>Data Preprocessing — Setting up the “Target” Value for Future Predictions</h3><pre># Target flexible way<br>pipdiff_percentage = 0.01  # 1% (0.01) of the asset&#39;s price for TP<br>SLTPRatio = 2.0  # pipdiff/Ratio gives SL<br>def mytarget(barsupfront, df1):<br>    length = len(df1)<br>    high = list(df1[&#39;High&#39;])<br>    low = list(df1[&#39;Low&#39;])<br>    close = list(df1[&#39;Close&#39;])<br>    open_ = list(df1[&#39;Open&#39;])  # Renamed &#39;open&#39; to &#39;open_&#39; to avoid conflict with Python&#39;s built-in function<br>    trendcat = [None] * length<br>    for line in range(0, length - barsupfront - 2):<br>        valueOpenLow = 0<br>        valueOpenHigh = 0<br>        for i in range(1, barsupfront + 2):<br>            value1 = open_[line + 1] - low[line + i]<br>            value2 = open_[line + 1] - high[line + i]<br>            valueOpenLow = max(value1, valueOpenLow)<br>            valueOpenHigh = min(value2, valueOpenHigh)<br>            if (valueOpenLow &gt;= close[line + 1] * pipdiff_percentage) and (<br>                    -valueOpenHigh &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio):<br>                trendcat[line] = 2  # 
downtrend<br>                break<br>            elif (valueOpenLow &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio) and (<br>                    -valueOpenHigh &gt;= close[line + 1] * pipdiff_percentage):<br>                trendcat[line] = 1  # uptrend<br>                break<br>            else:<br>                trendcat[line] = 0  # no clear trend<br>    return trendcat</pre><p>This code defines a function mytarget that attempts to identify potential trends and set target values accordingly. It calculates the difference between the open price and upcoming highs/lows within a specified look-ahead window (barsupfront). Based on these differences and thresholds defined by pipdiff_percentage and SLTPRatio, the function classifies each bar as uptrend, downtrend, or no clear trend. These classifications then serve as the labels the classifier learns to predict.</p><p><strong>Here’s the breakdown of the code provided:</strong></p><p><strong>Parameters:</strong></p><ul><li>barsupfront (integer): The number of bars to look ahead from the current bar for trend classification.</li><li>df1 (pandas DataFrame): The DataFrame containing OHLC (Open, High, Low, Close) prices.</li></ul><p><strong>Function Logic:</strong></p><ol><li><strong>Initialization:</strong></li></ol><ul><li>It retrieves the length of the DataFrame (length).</li><li>It extracts lists of high, low, close, and open prices (high, low, close, open_). Note that open is renamed to open_ to avoid conflicts with Python&#39;s built-in open function.</li><li>It initializes a list trendcat with length elements, all set to None, which will eventually hold the trend category (uptrend, downtrend, or no trend) for each bar.</li></ul><p><strong>2. 
Trend Classification Loop:</strong></p><ul><li>The code iterates through the DataFrame from the first bar up to bar length - barsupfront - 3, so that a full look-ahead window is always available.</li><li>Inside the loop:</li><li>It calculates two values:</li><li>valueOpenLow: Maximum difference between the open price at the next bar and the low prices over the following barsupfront + 1 bars (how far price falls below the entry open).</li><li>valueOpenHigh: Minimum difference between the open price at the next bar and the high prices over the following barsupfront + 1 bars (its negation measures how far price rises above the entry open).</li><li>It checks these values against thresholds based on pipdiff_percentage (a percentage of the asset&#39;s price) and SLTPRatio:</li><li>If valueOpenLow is greater than or equal to close[line + 1] * pipdiff_percentage (price falls at least the take-profit distance below the open) AND -valueOpenHigh is less than or equal to close[line + 1] * pipdiff_percentage / SLTPRatio (price never rises more than the stop-loss distance above it), it classifies the move as a downtrend (trendcat[line] is set to 2, a short signal).</li><li>Conversely, if valueOpenLow is less than or equal to close[line + 1] * pipdiff_percentage / SLTPRatio (price never falls more than the stop-loss distance) AND -valueOpenHigh is greater than or equal to close[line + 1] * pipdiff_percentage (price rises at least the take-profit distance), it classifies the move as an uptrend (trendcat[line] is set to 1, a long signal).</li><li>If neither condition is met, it marks no clear trend (trendcat[line] is set to 0).</li></ul><p><strong>3. Return:</strong></p><ul><li>The function returns the trendcat list containing the trend classification for each bar; the last barsupfront + 2 bars remain None because there is not enough look-ahead data for them.</li></ul><pre>#!!! 
pitfall one category high frequency<br>df[&#39;Target&#39;] = mytarget(2, df)<br>df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)<br>#df.tail(20)<br>df.replace([np.inf, -np.inf], np.nan, inplace=True)<br>df.dropna(axis=0, inplace=True)<br><br># Convert the target labels to integer type<br># (casting the whole DataFrame to int would truncate the price columns)<br>df[&#39;Target&#39;] = df[&#39;Target&#39;].astype(int)<br>df[&#39;Target&#39;].hist()<br>count_of_twos_target = df[&#39;Target&#39;].value_counts().get(2, 0)<br>count_of_zeros_target = df[&#39;Target&#39;].value_counts().get(0, 0)<br>count_of_ones_target = df[&#39;Target&#39;].value_counts().get(1, 0)<br>percent_of_zeros_over_ones_and_twos = (100 - (count_of_zeros_target/ (count_of_zeros_target + count_of_ones_target + count_of_twos_target))*100)<br>print(f&#39; count_of_zeros = {count_of_zeros_target}\n count_of_twos_target = {count_of_twos_target}\n count_of_ones_target={count_of_ones_target}\n percent_of_zeros_over_ones_and_twos = {round(percent_of_zeros_over_ones_and_twos,2)}%&#39;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/373/1*hnmezwvqGGgIlWUVFCut6Q.png" /><figcaption>output of the above code</figcaption></figure><p>After assigning trend classifications (Target) based on the mytarget function, the code performs data cleaning by handling infinities and removing rows with missing values. It then analyzes the distribution of target values using a histogram and calculates the proportion of bars classified as each trend category. This helps assess the balance between clear uptrends, downtrends, and periods with no clear trend in the data.</p><p><strong>1. 
Assigning Target Values and Shifting:</strong></p><ul><li>The code assigns the output of mytarget(2, df) (the trend classifications) to the &#39;Target&#39; column (df[&#39;Target&#39;] = mytarget(2, df)).</li><li>It then shifts the &#39;Target&#39; values forward by one bar (df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)), so the label stored at bar n is the classification computed at bar n-1. This keeps each label aligned with information that was available when the corresponding trade would have been entered.</li></ul><p><strong>2. Handling Infinities and Missing Values:</strong></p><ul><li>The code replaces positive and negative infinity (np.inf and -np.inf) with NaN (Not a Number) values in the DataFrame (df.replace([np.inf, -np.inf], np.nan, inplace=True)). This is necessary because some mathematical operations cannot handle infinities.</li><li>It then removes rows with missing values (NaN) from the DataFrame (df.dropna(axis=0, inplace=True)) to ensure clean data for further analysis.</li></ul><p><strong>3. Converting Data Types:</strong></p><ul><li>The &#39;Target&#39; column holds the categorical labels 0, 1, and 2 and should be cast to integer so the classifier and backtester receive discrete classes. Note that casting the entire DataFrame to int would truncate the floating-point price and indicator columns, so the cast is best limited to &#39;Target&#39;.</li></ul><p><strong>4. Analyzing Target Distribution:</strong></p><ul><li>The code plots a histogram of the &#39;Target&#39; column (df[&#39;Target&#39;].hist()). This helps visualize the distribution of target values (uptrend, downtrend, or no trend) across the data.</li><li>It then calculates the counts of each target value (0, 1, and 2) using value_counts().</li><li>Finally, it calculates the percentage of bars that received a trade signal (classes 1 and 2) relative to all bars; note the variable name is misleading, since it equals 100 minus the share of “no trend” bars (percent_of_zeros_over_ones_and_twos). 
This provides insights into the balance between clear trends and unclear trends in the data.</li></ul><p>This code segment effectively calculates target categories based on predefined criteria and provides insights into the distribution of these categories within the dataset.</p><h3>Checking if the above Code Gives the Best Possible Returns for the “Target” Data Created:</h3><pre># Check for NaN values:<br>has_nan = df[&#39;Target&#39;].isnull().values.any()<br>print(&quot;NaN values present:&quot;, has_nan)<br><br># Check for infinite values:<br>has_inf = df[&#39;Target&#39;].isin([np.inf, -np.inf]).values.any()<br>print(&quot;Infinite values present:&quot;, has_inf)<br># Count the number of NaN and infinite values:<br>nan_count = df[&#39;Target&#39;].isnull().sum()<br>inf_count = (df[&#39;Target&#39;] == np.inf).sum() + (df[&#39;Target&#39;] == -np.inf).sum()<br>print(&quot;Number of NaN values:&quot;, nan_count)<br>print(&quot;Number of infinite values:&quot;, inf_count)<br># Get the indices of NaN and infinite values:<br>nan_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isnull()]<br>inf_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isin([np.inf, -np.inf])]<br>print(&quot;Indices of NaN values:&quot;, nan_indices)<br>print(&quot;Indices of infinite values:&quot;, inf_indices)<br>df[&#39;Target&#39;]<br>df = df.reset_index(inplace=False)<br>df[&#39;Date&#39;] = pd.to_datetime(df[&#39;Date&#39;])<br>df.set_index(&#39;Date&#39;, inplace=True)<br>def SIGNAL(df):<br>    return df[&#39;Target&#39;]<br># Backtest is needed below alongside Strategy<br>from backtesting import Backtest, Strategy<br>class MyCandlesStrat(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1 = self.I(SIGNAL, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        if self.signal1 == 1:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            
self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1 == 2:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br>            <br>bt = Backtest(df, MyCandlesStrat, cash=100000, commission=.001, exclusive_orders = True)<br>stat = bt.run()<br>stat</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/368/1*0kb5rR7CnLf7tani_mNMLQ.png" /><figcaption>output of above code</figcaption></figure><ol><li><strong>Checking for Missing and Infinite Values:</strong></li></ol><ul><li>The code checks for the presence of NaN (Not a Number) and infinite values in the &#39;Target&#39; column (df[&#39;Target&#39;]).</li><li>It then counts the number of occurrences and retrieves the indices of these values.</li><li>These checks are crucial because backtesting libraries typically cannot handle missing or infinite values in signals.</li></ul><p><strong>2. Backtesting Framework Setup:</strong></p><ul><li>The code defines a function SIGNAL(df) that simply returns the &#39;Target&#39; column values. 
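</li></ul><p>The strategy above turns these signals into orders with percentage-based exits; the stop and target prices are plain offsets from the last close. A standalone sketch of that arithmetic (the close price below is illustrative):</p>

```python
close = 2000.0           # hypothetical ETH close
sl_pct = tp_pct = 0.025  # 2.5% stop-loss and take-profit, as in the strategy

# Long entry: stop below the close, target above it
long_sl = close * (1 - sl_pct)
long_tp = close * (1 + tp_pct)

# Short entry: stop above the close, target below it
short_sl = close * (1 + sl_pct)
short_tp = close * (1 - tp_pct)

print(round(long_sl, 2), round(long_tp, 2), round(short_sl, 2), round(short_tp, 2))
# → 1950.0 2050.0 2050.0 1950.0
```

<ul><li>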
This function essentially provides the buy/sell signals based on the target classifications (1 for uptrend buy, 2 for downtrend sell).</li><li>It imports the Strategy class from the backtesting library.</li><li>It defines a custom strategy class MyCandlesStrat that inherits from Strategy.</li><li>The init method initializes an indicator named signal1 that holds the target values using the I function (presumably from backtesting).</li><li>The next method defines the trading logic:</li><li>If the signal1 is 1 (uptrend), it places a buy order with a stop-loss and take-profit based on percentages of the closing price.</li><li>If the signal1 is 2 (downtrend), it places a sell order with a stop-loss and take-profit based on percentages of the closing price.</li></ul><p><strong>3. Backtesting and Evaluation:</strong></p><ul><li>The code creates a Backtest object using the backtesting library. It provides the DataFrame (df), the strategy class (MyCandlesStrat), initial capital (cash), commission rate (commission), and sets exclusive_orders to True (potentially to prevent overlapping orders).</li><li>It runs the backtest using the bt.run() method and stores the results in the stat variable.</li></ul><p><strong>Does this code definitively determine the effectiveness of the target values?</strong></p><p>No, this code doesn’t definitively determine the effectiveness of the target values. Here’s why:</p><ul><li><strong>Parameter Optimization:</strong> The stop-loss and take-profit percentages (sl_pct and tp_pct) are fixed in the code. Optimizing these parameters for the specific strategy and market conditions could potentially improve performance.</li><li><strong>Single Backtest Run:</strong> Running the backtest only once doesn’t account for the inherent randomness in financial markets. 
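</li></ul><p>Worth noting: backtesting.py itself is deterministic, so re-running it with a different seed changes nothing unless the data or strategy is randomized; a common robustness check instead scores the strategy over several sub-periods. A sketch using synthetic per-bar returns as a stand-in for the strategy’s real equity changes:</p>

```python
import numpy as np

# Synthetic per-bar strategy returns; a real check would slice the
# backtest's own return series instead
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 10_000)

# Score each of five equal sub-periods separately rather than
# the whole history at once
n_windows = 5
window = len(returns) // n_windows
sharpes = []
for i in range(n_windows):
    chunk = returns[i * window:(i + 1) * window]
    sharpes.append(chunk.mean() / chunk.std())  # per-bar Sharpe for this window

print(len(sharpes))  # → 5
```

A wide spread across the windows is a warning sign even when the aggregate run looks good.<ul><li>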
Ideally, you’d evaluate the strategy over multiple sub-periods or resampled datasets to assess its robustness.</li></ul><p><strong>How to improve the code for target evaluation?</strong></p><ul><li><strong>Calculate Performance Metrics:</strong> Modify the code to calculate and print relevant performance metrics like Sharpe Ratio, drawdown, and total profit after the backtest run.</li><li><strong>Optimize Stop-Loss and Take-Profit:</strong> Implement a parameter optimization process to find the best stop-loss and take-profit values for the strategy using the target signals.</li><li><strong>Multiple Backtest Runs:</strong> Run the backtest over different sub-periods or bootstrapped samples (e.g., using a loop) and analyze the distribution of performance metrics to assess the strategy’s consistency.</li></ul><p>By incorporating these improvements, we can gain a more comprehensive understanding of how well the target values from the mytarget function perform in a backtesting framework. Remember, backtesting results are not guarantees of future performance, so real-world testing with a smaller capital allocation is essential before deploying a strategy with real money.</p><h3>Scaling and splitting the dataframe for training and testing:</h3><pre>scaler = MinMaxScaler(feature_range=(0,1))<br><br>df_model = df.copy()<br># Split into Learning (X) and Target (y) Data<br>X = df_model.iloc[:, : -1]<br>y = df_model.iloc[:, -1]<br>X_scaled = scaler.fit_transform(X)<br># Define a function to reshape the data<br>def reshape_data(data, time_steps):<br>    samples = len(data) - time_steps + 1<br>    reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>    for i in range(samples):<br>        reshaped_data[i] = data[i:i + time_steps]<br>    return reshaped_data<br># Reshape the scaled X data<br>time_steps = 1  # Adjust the number of time steps as needed<br>X_reshaped = reshape_data(X_scaled, time_steps)<br># Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, features)<br># 
Each sample contains scaled data for a specific time window<br># Align y with X_reshaped by discarding excess target values<br>y_aligned = y[time_steps - 1:]  # Discard the first (time_steps - 1) target values<br>X = X_reshaped<br>y = y_aligned<br>print(len(X),len(y))<br># Split data into train and test sets (considering time series data)<br>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)</pre><p><strong>1. Data Preparation:</strong></p><ul><li><strong>Copying Data:</strong> It creates a copy of the original DataFrame (df_model = df.copy()) to avoid modifying the original data.</li></ul><p><strong>2. Splitting Features and Target:</strong></p><ul><li><strong>Separating Features (X) and Target (y):</strong> It separates the features (all columns except the last) and the target variable (the last column) using slicing (X = df_model.iloc[:, : -1], y = df_model.iloc[:, -1]).</li></ul><p><strong>3. Scaling Features:</strong></p><ul><li><strong>MinMaxScaler:</strong> It creates a MinMaxScaler object to scale the features between 0 and 1 (scaler = MinMaxScaler(feature_range=(0,1))). This can be helpful for some machine learning algorithms that work better with normalized data.</li><li><strong>Scaling X:</strong> It scales the feature data (X) using the fit_transform method of the scaler (X_scaled = scaler.fit_transform(X)).</li></ul><p><strong>4. 
Reshaping Data (Windowing):</strong></p><ul><li><strong>Reshape Function:</strong> It defines a function reshape_data that takes the data and the number of time steps (time_steps) as input.</li><li>This function iterates through the data with a sliding window of time_steps and creates a new 3D array (reshaped_data).</li><li>Each element in the new array represents a sample, containing a sequence of time_steps data points for each feature.</li><li><strong>Reshaping Scaled X:</strong> It defines the number of time steps (time_steps) and reshapes the scaled feature data (X_scaled) using the reshape_data function (X_reshaped = reshape_data(X_scaled, time_steps)).</li><li>This step transforms the data into a format suitable for time series forecasting models that require sequences of past observations to predict future values.</li></ul><p><strong>5. Aligning Target with Reshaped Data:</strong></p><ul><li><strong>Discarding Excess Target Values:</strong> Since the reshaped data (X_reshaped) considers a window of time_steps, the corresponding target values need an adjustment. It discards the first time_steps - 1 target values from y to align with the reshaped data (y_aligned = y[time_steps - 1:]).</li></ul><p><strong>6. 
Final Splitting (Train-Test):</strong></p><ul><li><strong>Train-Test Split:</strong> It splits the reshaped features (X) and aligned target (y) into training and testing sets using train_test_split from scikit-learn (X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)).</li><li>It sets test_size=0.3 to allocate 30% of the data for testing and shuffle=False because shuffling data in time series can disrupt the temporal order.</li></ul><p><strong>Overall, this code effectively addresses key aspects of data preparation for time series forecasting models:</strong></p><ul><li>Scaling features to a common range can improve model performance for some algorithms.</li><li>Reshaping data into a 3D structure with time steps allows models to learn from sequences of past observations.</li><li>Aligning the target variable with the reshaped data ensures the model predicts for the correct time steps.</li><li>Splitting data into training and testing sets with shuffle=False preserves the temporal order for time series forecasting.</li></ul><p><strong>Additional Considerations:</strong></p><ul><li>The choice of scaler (MinMaxScaler, StandardScaler, etc.) 
might depend on the specific model and data characteristics.</li><li>You might explore different window sizes (time_steps) to see how they affect model performance.</li><li>Techniques like stationarity checks and differencing might be necessary for certain time series data before applying these steps.</li></ul><h3>LSTM Model Manual Optimization</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6azJsVI3fbuivDoR.png" /><figcaption>LSTM Classification time series model</figcaption></figure><pre>from keras.layers import Input, Dense, Dropout<br>from keras.models import Model<br>from keras.optimizers import Adam<br>from keras.layers import LSTM<br>from keras.metrics import Precision, Recall<br>from keras.utils import to_categorical<br><br>class_weights = {0: 3.33, 1: 3.33, 2: 3.34}  # Adjust weights as needed<br><br><br># Define LSTM model<br>def build_lstm_model(input_shape, units=193, dropout=0.2, lr=0.00002):<br>    inputs = Input(shape=input_shape)<br>    lstm_layer = LSTM(units=units, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(inputs)<br>    lstm_layer_2 = LSTM(units=128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(lstm_layer)<br>    lstm_layer_3 = LSTM(units=96, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(lstm_layer_2)<br>    lstm_layer_4 = LSTM(units=48, dropout=0.2, recurrent_dropout=0.2)(lstm_layer_3)<br>    dense_layer = Dense(units=32, activation=&#39;relu&#39;)(lstm_layer_4)<br>    dropout_layer = Dropout(dropout)(dense_layer)<br>    outputs = Dense(3, activation=&#39;softmax&#39;)(dropout_layer)<br>    model = Model(inputs=inputs, outputs=outputs)<br>    optimizer = Adam(learning_rate=lr)<br>    model.compile(optimizer=optimizer, loss=&#39;categorical_crossentropy&#39;, metrics=[Precision(), &#39;accuracy&#39;, Recall()])<br>    return model<br><br># Convert y_train to one-hot encoded format<br>y_train_one_hot = to_categorical(y_train, num_classes=3)<br><br># Instantiate the 
model<br>model_lstm = build_lstm_model(input_shape=(X_train.shape[1], X_train.shape[2]))<br><br># Fit the model to the training data<br># model_lstm.fit(X_train, y_train_one_hot, epochs=150, batch_size=18, validation_split=0.2, verbose=1, class_weight=class_weights)<br>model_lstm.fit(X_train, y_train_one_hot, epochs=100, batch_size=18, validation_split=0.2, verbose=1)</pre><p>This code builds and trains an LSTM (Long Short-Term Memory) neural network model to classify time series data into three categories: neutral (0), long (1), and short (2). The dataset comprises 100,000 rows of Ethereum (ETH) data with a 15-minute timeframe and 193 features.</p><p>Here’s a step-by-step explanation of the code:</p><h4>Imports and Setup</h4><pre>from keras.layers import Input, Dense, Dropout, LSTM<br>from keras.models import Model<br>from keras.optimizers import Adam<br>from keras.metrics import Precision, Recall<br>from keras.utils import to_categorical</pre><p>These lines import necessary components from Keras for building and training the LSTM model:</p><ul><li><strong>Input</strong>: To define the input layer.</li><li><strong>Dense</strong>: For fully connected layers.</li><li><strong>Dropout</strong>: To prevent overfitting by randomly setting a fraction of input units to 0 at each update during training.</li><li><strong>LSTM</strong>: For LSTM layers, which are used to capture temporal dependencies in the data.</li><li><strong>Model</strong>: To create the model.</li><li><strong>Adam</strong>: Optimizer for updating model parameters.</li><li><strong>Precision, Recall</strong>: Metrics to evaluate model performance.</li><li><strong>to_categorical</strong>: To convert class vectors (integers) to binary class matrices.</li></ul><h4>Class Weight</h4><pre>class_weights = {0: 3.33, 1: 3.33, 2: 3.34}</pre><p>These weights are used to handle class imbalance by giving more importance to underrepresented classes during training. 
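</p><p>If you want weights that genuinely counteract imbalance, they can be derived from the label frequencies, mirroring scikit-learn’s “balanced” heuristic (the labels below are illustrative). Near-equal weights such as {0: 3.33, 1: 3.33, 2: 3.34} rescale the loss almost uniformly and therefore barely change what the model learns:</p>

```python
import numpy as np

# Illustrative labels: class 0 is twice as frequent as classes 1 and 2
y = np.array([0, 0, 0, 0, 1, 1, 2, 2])
classes, counts = np.unique(y, return_counts=True)

# Inverse-frequency weights: n_samples / (n_classes * count_per_class)
class_weights = {
    int(c): float(len(y) / (len(classes) * n))
    for c, n in zip(classes, counts)
}
print(class_weights)  # rarer classes receive larger weights
```

<p>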
Adjust these values as needed based on the distribution of your dataset.</p><h4>Define the LSTM Model</h4><pre>def build_lstm_model(input_shape, units=193, dropout=0.2, lr=0.00002):<br>    inputs = Input(shape=input_shape)<br>    lstm_layer = LSTM(units=units, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(inputs)<br>    lstm_layer_2 = LSTM(units=128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(lstm_layer)<br>    lstm_layer_3 = LSTM(units=96, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)(lstm_layer_2)<br>    lstm_layer_4 = LSTM(units=48, dropout=0.2, recurrent_dropout=0.2)(lstm_layer_3)<br>    dense_layer = Dense(units=32, activation=&#39;relu&#39;)(lstm_layer_4)<br>    dropout_layer = Dropout(dropout)(dense_layer)<br>    outputs = Dense(3, activation=&#39;softmax&#39;)(dropout_layer)<br>    model = Model(inputs=inputs, outputs=outputs)<br>    optimizer = Adam(learning_rate=lr)<br>    model.compile(optimizer=optimizer, loss=&#39;categorical_crossentropy&#39;, metrics=[Precision(), &#39;accuracy&#39;, Recall()])<br>    return model</pre><p>This function builds the LSTM model:</p><ol><li><strong>Inputs</strong>: Defines the input shape based on the training data.</li><li><strong>LSTM Layers</strong>: Four LSTM layers with decreasing units to capture sequential dependencies.</li></ol><ul><li>return_sequences=True is used in the first three LSTM layers to return the full sequence of outputs.</li><li>Dropout and recurrent dropout are used to prevent overfitting.</li></ul><p><strong>3. Dense Layer</strong>: A fully connected layer with ReLU activation.</p><p><strong>4. Dropout Layer</strong>: Added after the dense layer to further reduce overfitting.</p><p><strong>5. Output Layer</strong>: A dense layer with softmax activation to output probabilities for the three classes.</p><p><strong>6. 
Compile Model</strong>: The model is compiled with the Adam optimizer, categorical cross-entropy loss, and metrics for precision, accuracy, and recall.</p><h4>Convert Labels to One-Hot Encoding</h4><pre>y_train_one_hot = to_categorical(y_train, num_classes=3)</pre><p>This line converts the target variable y_train into a one-hot encoded format, which is required for the categorical cross-entropy loss function.</p><h4>Instantiate and Train the Model</h4><pre>model_lstm = build_lstm_model(input_shape=(X_train.shape[1], X_train.shape[2]))<br><br>model_lstm.fit(X_train, y_train_one_hot, epochs=100, batch_size=18, validation_split=0.2, verbose=1, class_weight=class_weights)</pre><ol><li><strong>Instantiate the Model</strong>: The build_lstm_model function is called with the input shape derived from the training data.</li><li><strong>Train the Model</strong>: The fit method trains the model on the training data.</li></ol><ul><li>epochs=100: Number of epochs for training.</li><li>batch_size=18: Number of samples per gradient update.</li><li>validation_split=0.2: Fraction of the training data to be used as validation data.</li><li>verbose=1: Verbosity mode (progress bar).</li><li>class_weight=class_weights: To handle class imbalance during training.</li></ul><h4>Why LSTM is Better than 1D CNN for Time Series Data</h4><ol><li><strong>Sequential Nature</strong>: LSTMs are specifically designed to handle sequential data and can capture temporal dependencies better than 1D CNNs, which are more suited for spatial data.</li><li><strong>Memory Capability</strong>: LSTMs have a memory cell that can remember previous inputs, making them more effective for time series data where past values influence future values.</li><li><strong>Handling Long Sequences</strong>: LSTMs can handle longer sequences due to their gating mechanisms, whereas 1D CNNs might struggle with long-term dependencies.</li></ol><h4>Summary</h4><p>This LSTM model is tailored to classify ETH time series data into 
three categories: neutral, long, and short. The use of multiple LSTM layers helps capture complex temporal dependencies in the data, making it well-suited for this classification task. The model is trained with appropriate class weights to handle imbalance, and its performance is evaluated using precision, accuracy, and recall metrics.</p><pre>from sklearn.metrics import confusion_matrix<br>import matplotlib.pyplot as plt<br>import seaborn as sns<br><br># # Reshape X_train and X_test back to their original shapes<br># X_train_original_shape = X_train.reshape(X_train.shape[0], -1)<br># X_test_original_shape = X_test.reshape(X_test.shape[0], -1)<br><br># X_test_reshaped = X_test_original_shape.reshape(-1, 1, X_test_original_shape.shape[1])<br><br><br># Now X_train_original_shape and X_test_original_shape have their original shapes<br><br># Perform prediction on the original shape data<br># y_pred = model.predict(X_test_reshaped)<br>y_pred = model_lstm.predict(X_test)<br><br><br># Perform any necessary post-processing on y_pred if needed<br># For example, if your model outputs probabilities, you might convert them to class labels using argmax:<br><br>y_pred_classes = np.argmax(y_pred, axis=1)<br><br># y_test already holds integer class labels (only y_train was one-hot encoded)<br>y_test_classes = y_test<br><br># Plot confusion matrix for test data<br>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes)<br><br># Plot confusion matrix<br>plt.figure(figsize=(8, 6))<br>sns.heatmap(conf_matrix_test, annot=True, cmap=&#39;Blues&#39;, fmt=&#39;g&#39;, cbar=False)<br>plt.xlabel(&#39;Predicted labels&#39;)<br>plt.ylabel(&#39;True labels&#39;)<br>plt.title(&#39;Confusion Matrix - Test Data&#39;)<br>plt.show()<br><br>from sklearn.metrics import classification_report<br><br># Generate classification report for test data<br>class_report = classification_report(y_test, y_pred_classes)<br><br># Print classification report<br>print(&quot;Classification Report - Test Data:\n&quot;, 
class_report)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/573/1*CNpXmK8AH0V1dF6sc_gwxg.png" /></figure><p><strong>1. Imports:</strong></p><ul><li>confusion_matrix from sklearn.metrics for calculating the confusion matrix.</li><li>matplotlib.pyplot (plt) and seaborn (sns) for creating the confusion matrix visualization.</li><li>classification_report from sklearn.metrics for generating a classification report.</li></ul><p><strong>2. Reshaping Data (Commented Out):</strong></p><ul><li>The commented section addresses potential reshaping issues. It’s important to ensure your test data (X_test) has the correct shape expected by the model for prediction.</li></ul><p><strong>3. Prediction:</strong></p><ul><li>y_pred = model_lstm.predict(X_test) performs predictions on the test data using your trained model.</li></ul><p><strong>4. Post-processing Predictions:</strong></p><ul><li>y_pred_classes = np.argmax(y_pred, axis=1) assumes your model outputs probabilities for each class (neutral, long, short). This line converts the probabilities to class labels by using argmax (finding the index of the maximum value) along axis 1, the class dimension.</li></ul><p><strong>5. Converting True Labels:</strong></p><ul><li>y_test_classes = y_test assumes your y_test data already contains class labels (0, 1, 2) for the test set.</li></ul><p><strong>6. Confusion Matrix:</strong></p><ul><li>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes) calculates the confusion matrix for the test data. It shows how many samples from each true class were predicted into each class by the model.</li></ul><p><strong>7. Visualization:</strong></p><ul><li>The code creates a heatmap visualization of the confusion matrix using seaborn. This allows you to visually inspect how well the model classified each class. Ideally, you want to see high values on the diagonal, indicating correct classifications.</li></ul><p><strong>8. 
Classification Report:</strong></p><ul><li>class_report = classification_report(y_test, y_pred_classes) generates a classification report for the test data. This report provides metrics like precision, recall, F1-score, and support for each class, offering a more detailed breakdown of the model&#39;s performance.</li></ul><h4>Backtest with Test and Whole Data:</h4><pre>df_ens_test = df.copy() <br><br>df_ens = df_ens_test[len(X_train):]<br><br>df_ens[&#39;lstm_neural_scaled&#39;] =  np.argmax(model_lstm.predict(X_test), axis=1)<br><br>df_ens[&#39;lns&#39;] = df_ens[&#39;lstm_neural_scaled&#39;].shift(1).dropna().astype(int)<br><br>df_ens = df_ens.dropna()<br><br>df_ens[&#39;lns&#39;]<br><br># df_ens = df.copy() <br><br># # df_ens = df_ens_test[len(X_train):]<br><br># df_ens[&#39;lstm_neural_scaled&#39;] =  np.argmax(model_lstm.predict(X), axis=1)<br><br># df_ens[&#39;lns&#39;] = df_ens[&#39;lstm_neural_scaled&#39;].shift(-1).dropna().astype(int)<br><br># df_ens = df_ens.dropna()<br><br># df_ens[&#39;lns&#39;]<br><br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br><br>def SIGNAL_3(df_ens):<br>    return df_ens[&#39;lns&#39;]<br><br>class MyCandlesStrat_3(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_3, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        if self.signal1_1 == 1:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1_1 == 2:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            
tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br><br>            <br>bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, exclusive_orders=False)<br>stat_3 = bt_3.run()<br>stat_3<br></pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/487/1*F4hGhe7srBA0KRs06TNiGw.png" /><figcaption>backtest results for lstm classification time series model for crypto eth 15m timeframe 1000+ days result</figcaption></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>This code takes the predictions from an LSTM model and uses them to generate trading signals for backtesting a trading strategy on Ethereum (ETH) time series data.</p><h4>Data Preparation</h4><pre>df_ens_test = df.copy()<br>df_ens = df_ens_test[len(X_train):]</pre><ul><li><strong>df_ens_test</strong>: Creates a copy of the original DataFrame df.</li><li><strong>df_ens</strong>: Extracts a subset of df_ens_test starting from the length of the training data X_train. This assumes that X_train is the training dataset and the rest is for testing/validation.</li></ul><pre>df_ens[&#39;lstm_neural_scaled&#39;] = np.argmax(model_lstm.predict(X_test), axis=1)</pre><ul><li><strong>lstm_neural_scaled</strong>: Uses the trained LSTM model to make predictions on the test data X_test. 
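</li></ul><p>The probability-to-label step can be seen in isolation (the softmax outputs below are made up):</p>

```python
import numpy as np

# Illustrative softmax outputs for 3 bars over classes 0/1/2
probs = np.array([
    [0.7, 0.2, 0.1],   # most likely neutral
    [0.1, 0.8, 0.1],   # most likely long
    [0.2, 0.1, 0.7],   # most likely short
])
labels = np.argmax(probs, axis=1)  # pick the highest-probability class per row
print(labels)  # → [0 1 2]
```

<ul><li>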
The np.argmax function converts the predicted probabilities into class labels (0, 1, or 2).</li></ul><pre>df_ens[&#39;lns&#39;] = df_ens[&#39;lstm_neural_scaled&#39;].shift(1).dropna().astype(int)<br>df_ens = df_ens.dropna()</pre><ul><li><strong>lns</strong>: Shifts the predictions by one time step to align the signals with the corresponding trading periods. dropna removes any rows with NaN values, ensuring that the shifted column aligns correctly.</li></ul><pre>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)</pre><ul><li><strong>Reset and set index</strong>: Resets the index of the DataFrame and sets the Date column as the index. This ensures the DataFrame is correctly indexed by date.</li></ul><h4>Trading Signal and Strategy</h4><pre>def SIGNAL_3(df_ens):<br>    return df_ens[&#39;lns&#39;]</pre><ul><li><strong>SIGNAL_3</strong>: A function that returns the shifted predictions to be used as trading signals.</li></ul><pre>class MyCandlesStrat_3(Strategy):<br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_3, self.data)<br>    def next(self):<br>        super().next()<br>        if self.signal1_1 == 1:<br>            sl_pct = 0.055<br>            tp_pct = 0.055<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1_1 == 2:<br>            sl_pct = 0.055<br>            tp_pct = 0.055<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)</pre><ul><li><strong>MyCandlesStrat_3</strong>: A trading strategy class that implements the signals from SIGNAL_3.</li><li><strong>init</strong>: Initializes the strategy by storing the signal.</li><li><strong>next</strong>: 
Executes trades based on the signal. If the signal is 1 (long), it places a buy order with a stop-loss (SL) and take-profit (TP). If the signal is 2 (short), it places a sell order with SL and TP.</li></ul><h4>Backtesting</h4><pre>bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, exclusive_orders=False)<br>stat_3 = bt_3.run()<br>stat_3</pre><ul><li><strong>Backtest</strong>: Runs the backtest using the Backtest class from the backtesting library.</li><li>cash=100000: Initial capital.</li><li>commission=.001: Trading commission.</li><li>exclusive_orders=False: Allows overlapping orders.</li><li><strong>stat_3</strong>: Stores the backtest results.</li></ul><h3>Explanation of the Results</h3><p>The backtest results provide various metrics that evaluate the performance of the trading strategy:</p><ul><li><strong>Start</strong>: The start date of the backtest.</li><li><strong>End</strong>: The end date of the backtest.</li><li><strong>Duration</strong>: Total duration of the backtest period.</li><li><strong>Exposure Time [%]</strong>: Percentage of time the strategy was active in the market.</li><li><strong>Equity Final [$]</strong>: Final equity at the end of the backtest.</li><li><strong>Equity Peak [$]</strong>: Maximum equity value reached.</li><li><strong>Return [%]</strong>: Total return percentage.</li><li><strong>Buy &amp; Hold Return [%]</strong>: Return if the asset was simply held during the period.</li><li><strong>Return (Ann.) [%]</strong>: Annualized return percentage.</li><li><strong>Volatility (Ann.) [%]</strong>: Annualized volatility percentage.</li><li><strong>Sharpe Ratio</strong>: Risk-adjusted return measure (higher is better).</li><li><strong>Sortino Ratio</strong>: Similar to Sharpe Ratio but penalizes downside volatility (higher is better).</li><li><strong>Calmar Ratio</strong>: Annualized return divided by the maximum drawdown (higher is better).</li><li><strong>Max. 
Drawdown [%]</strong>: Maximum observed loss from a peak.</li><li><strong>Avg. Drawdown [%]</strong>: Average drawdown during the period.</li><li><strong>Max. Drawdown Duration</strong>: Longest duration of a drawdown period.</li><li><strong>Avg. Drawdown Duration</strong>: Average duration of drawdown periods.</li><li><strong># Trades</strong>: Number of trades executed.</li><li><strong>Win Rate [%]</strong>: Percentage of profitable trades.</li><li><strong>Best Trade [%]</strong>: Percentage return of the best trade.</li><li><strong>Worst Trade [%]</strong>: Percentage loss of the worst trade.</li><li><strong>Avg. Trade [%]</strong>: Average return per trade.</li><li><strong>Max. Trade Duration</strong>: Longest duration a trade was held.</li><li><strong>Avg. Trade Duration</strong>: Average duration trades were held.</li><li><strong>Profit Factor</strong>: Ratio of gross profits to gross losses.</li><li><strong>Expectancy [%]</strong>: Expected return per trade.</li><li><strong>SQN</strong>: System Quality Number, a measure of strategy performance.</li></ul><h3>Key Takeaways from the Results</h3><ol><li><strong>Return and Equity</strong>: The strategy achieved a 41.35% return over the backtest period, ending with $141,347.90 in equity from an initial $100,000.</li><li><strong>Volatility and Drawdowns</strong>: The strategy experienced significant volatility (72.45% annualized) with a maximum drawdown of -25.33%.</li><li><strong>Performance Metrics</strong>: The Sharpe Ratio of 0.71 and Sortino Ratio of 1.76 indicate a decent risk-adjusted return, though not exceptional.</li><li><strong>Trading Activity</strong>: The strategy executed 144 trades with a win rate of 48.61%. 
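</li></ol><p>With symmetric 5.5% stops and targets, a win rate below 50% implies a negative gross expectancy per trade; a rough, illustrative check (commissions ignored, and real exits rarely land exactly on the stop or target):</p>

```python
win_rate = 0.4861  # from the backtest above
avg_win = 0.055    # assume winners hit the 5.5% take-profit exactly
avg_loss = 0.055   # assume losers hit the 5.5% stop-loss exactly

expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
print(f"{expectancy * 100:.2f}% per trade")  # → -0.15% per trade
```

<ol start="4"><li>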
The average trade resulted in a slight loss (-0.22%).</li><li><strong>Risk Management</strong>: The use of stop-loss and take-profit mechanisms helped manage risk but also resulted in mixed trade outcomes.</li></ol><p>In summary, while the strategy achieved a positive return, it faced high volatility and drawdowns. Continuous optimization and risk management are necessary to improve its performance.</p><h3>Save the Model:</h3><pre>from keras.models import save_model<br>import time<br># save the lstm model with its key parameters in the filename<br>model_lstm.save(f&quot;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_{round(stat_3[&#39;Sharpe Ratio&#39;],2)}_time_{time.strftime(&#39;%Y%m%d%H%M%S&#39;)}.keras&quot;)</pre><p><strong>Explanation:</strong></p><ol><li><strong>Saving:</strong></li></ol><ul><li>The code calls the model&#39;s own save method (model_lstm.save(...)); the save_model import from keras.models is an equivalent functional form and isn&#39;t strictly needed here.</li></ul><p><strong>Key Points:</strong></p><ul><li>This approach provides a clear and informative way to save our model, including details about its training parameters, data, and performance.</li><li>You can modify the filename structure to include additional information relevant to your needs.</li></ul><h4>Let’s Backtest the Entire Dataset with the Saved Model:</h4><pre>from keras.models import load_model<br><br># Load the saved LSTM model from disk<br>best_model = load_model(&#39;./model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;)</pre><p><strong>Intended Functionality:</strong></p><ol><li><strong>Import:</strong></li></ol><ul><li>load_model from keras.models is used to load a saved model.</li></ul><p><strong>2. 
Loading the Model:</strong></p><ul><li>best_model = load_model(&#39;./model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;): This line loads the model saved as model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras from the current working directory (./).</li></ul><pre>df_ens = df.copy() <br><br># df_ens = df_ens_test[:len(X)]<br>y_pred = best_model.predict(X)<br><br># Perform any necessary post-processing on y_pred if needed<br># For example, if your model outputs probabilities, you might convert them to class labels using argmax:<br># y_pred_classes = np.argmax(y_pred, axis=1)<br>y_pred = np.argmax(y_pred, axis=1) # for lstm, tcn, cnn models<br># y_pred = np.argmax(y_pred, axis=2) # for transformers model<br>df_ens[&#39;best_model&#39;] =  y_pred<br>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int)<br>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22)<br>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=55)<br>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108)<br>df_ens = df_ens.dropna()<br>df_ens[&#39;bm&#39;]<br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br>def SIGNAL_010(df_ens):<br>    return df_ens[&#39;bm&#39;]<br>def SIGNAL_0122(df_ens):<br>    return df_ens[&#39;ema_22&#39;]<br>def SIGNAL_0155(df_ens):<br>    return df_ens[&#39;ema_55&#39;]<br>def SIGNAL_01108(df_ens):<br>    return df_ens[&#39;ema_108&#39;]<br>class MyCandlesStrat_010(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_010, self.data)<br>        self.ema_1_22 = self.I(SIGNAL_0122, self.data)<br>        self.ema_1_55 = self.I(SIGNAL_0155, self.data)<br>        self.ema_1_108 = self.I(SIGNAL_01108, self.data)<br>    <br>    def next(self):<br>        super().next() 
<br>        # if (self.signal1_1 == 1) and (self.data.Close &gt; self.ema_1_22) and (self.ema_1_22 &gt; self.ema_1_55) and (self.ema_1_55 &gt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 2.5% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 - sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 + tp_pct)<br>        #     self.buy(sl=sl_price, tp=tp_price)<br>        # elif (self.signal1_1 == 2)  and (self.data.Close &lt; self.ema_1_22) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 2.5% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 + sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 - tp_pct)<br>        #     self.sell(sl=sl_price, tp=tp_price)<br>            <br>    # def next(self):<br>    #     super().next() <br>    #     if (self.signal1_1 == 1) and (self.ema_1_22 &gt; self.ema_1_55) and (self.ema_1_55 &gt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 - sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 + tp_pct)<br>    #         self.buy(sl=sl_price, tp=tp_price)<br>    #     elif (self.signal1_1 == 2) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 + sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 - tp_pct)<br>    #         self.sell(sl=sl_price, tp=tp_price)<br>            <br>        if (self.signal1_1 == 1):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif (self.signal1_1 == 2):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br>            <br>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001)<br>stat_010 = bt_010.run()<br>stat_010</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/461/1*02U3WiCOs5BAF0cmaP2SBQ.png" /><figcaption>720%+ returns for ETH in 1022 days using the Neural Network LSTM Model with VishvaAlgo</figcaption></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>This code builds upon your previous strategy by incorporating an LSTM model prediction (&#39;best_model&#39;) along with Exponential Moving Averages (EMAs) to generate buy and sell signals for a backtesting strategy. Here&#39;s a breakdown:</p><p><strong>1. 
Data Preparation:</strong></p><ul><li>df_ens = df.copy(): Creates a copy of the original DataFrame (df).</li><li>y_pred = best_model.predict(X): Makes predictions on the entire DataFrame (X) using your loaded LSTM model (best_model).</li><li>df_ens[&#39;best_model&#39;] = y_pred: Adds a new column &#39;best_model&#39; to the DataFrame containing the model predictions.</li><li>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int): Similar to before, this creates a shifted signal column &#39;bm&#39; based on the predicted labels, but here it might include predictions for the entire DataFrame.</li><li>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22): Calculates the 22-period EMA for the &#39;Close&#39; price and adds it as a new column &#39;ema_22&#39;.</li><li>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=55): Similar to above, calculates the 55-period EMA and adds it as &#39;ema_55&#39;.</li><li>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108): Calculates the 108-period EMA and adds it as &#39;ema_108&#39;.</li><li>df_ens = df_ens.dropna(): Removes rows with missing values (likely the first row due to shifting).</li></ul><p><strong>2. Signal Functions (Outside the Code Block):</strong></p><ul><li>These functions (SIGNAL_010, SIGNAL_0122, etc.) simply return the corresponding columns from the DataFrame (&#39;bm&#39;, &#39;ema_22&#39;, etc.) used for generating the signals.</li></ul><p><strong>3. Backtesting Strategy Class (</strong><strong>MyCandlesStrat_010):</strong></p><ul><li>Inherits from Strategy.</li><li>def init(self): Initializes indicators for the LSTM model predictions (self.signal1_1) and EMAs (self.ema_1_22, etc.).</li></ul><p><strong>4. 
Backtesting Logic (in </strong><strong>next function):</strong></p><ul><li>The commented-out section shows a more complex logic considering the relationship between the LSTM predictions and the EMAs for buy/sell decisions.</li><li>The current active section uses a simpler approach:</li><li>If self.signal1_1 (LSTM prediction) is 1 (long):</li><li>Buy with stop-loss (SL) at 3.5% below current close and take-profit (TP) at 2.5% above.</li><li>If self.signal1_1 is 2 (short):</li><li>Sell with SL at 3.5% above current close and TP at 2.5% below.</li></ul><p><strong>5. Backtesting and Results:</strong></p><ul><li>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001): Creates a backtest object using the DataFrame, strategy class, and other parameters.</li><li>stat_010 = bt_010.run(): Runs the backtest and stores the results in stat_010.</li><li>stat_010: This variable likely contains the backtesting statistics you can analyze.</li></ul><h4>Explanation of the Updated Results</h4><p>The updated backtest results provide various metrics that evaluate the performance of the trading strategy over the specified period:</p><ul><li><strong>Start</strong>: 2021–01–06 05:30:00</li><li><strong>End</strong>: 2023–10–22 15:30:00</li><li><strong>Duration</strong>: 1019 days 10:00:00</li><li><strong>Exposure Time [%]</strong>: 77.46% — The percentage of time the strategy was actively holding positions in the market.</li><li><strong>Equity Final [$]</strong>: $820,380.357 — The final equity at the end of the backtest.</li><li><strong>Equity Peak [$]</strong>: $1,198,316.882 — The highest equity value reached during the backtest.</li><li><strong>Return [%]</strong>: 720.38% — The total return percentage over the backtest period.</li><li><strong>Buy &amp; Hold Return [%]</strong>: 47.19% — The return if the asset was simply held during the period.</li><li><strong>Return (Ann.) [%]</strong>: 110.58% — The annualized return percentage.</li><li><strong>Volatility (Ann.) 
[%]</strong>: 166.00% — The annualized volatility percentage.</li><li><strong>Sharpe Ratio</strong>: 0.67 — The risk-adjusted return measure (higher is better).</li><li><strong>Sortino Ratio</strong>: 2.72 — Similar to Sharpe Ratio but penalizes downside volatility (higher is better).</li><li><strong>Calmar Ratio</strong>: 2.37 — Annualized return divided by the maximum drawdown (higher is better).</li><li><strong>Max. Drawdown [%]</strong>: -46.70% — The maximum observed loss from a peak.</li><li><strong>Avg. Drawdown [%]</strong>: -4.32% — The average drawdown during the period.</li><li><strong>Max. Drawdown Duration</strong>: 277 days 21:00:00 — The longest duration of a drawdown period.</li><li><strong>Avg. Drawdown Duration</strong>: 5 days 16:57:00 — The average duration of drawdown periods.</li><li><strong># Trades</strong>: 1910 — The number of trades executed.</li><li><strong>Win Rate [%]</strong>: 63.66% — The percentage of profitable trades.</li><li><strong>Best Trade [%]</strong>: 2.49% — The percentage return of the best trade.</li><li><strong>Worst Trade [%]</strong>: -3.69% — The percentage loss of the worst trade.</li><li><strong>Avg. Trade [%]</strong>: 0.19% — The average return per trade.</li><li><strong>Max. Trade Duration</strong>: 15 days 21:30:00 — The longest duration a trade was held.</li><li><strong>Avg. 
Trade Duration</strong>: 0 days 13:33:00 — The average duration trades were held.</li><li><strong>Profit Factor</strong>: 1.18 — The ratio of gross profits to gross losses.</li><li><strong>Expectancy [%]</strong>: 0.23% — The expected return per trade.</li><li><strong>SQN</strong>: 1.24 — System Quality Number, a measure of strategy performance.</li></ul><h4>Key Takeaways from the Updated Results</h4><ol><li><strong>High Return</strong>: The strategy achieved a 720.38% return over the backtest period, significantly outperforming the buy-and-hold return of 47.19%.</li><li><strong>Volatility and Drawdowns</strong>: The strategy experienced high volatility (166% annualized) and a substantial maximum drawdown of -46.70%. However, it managed to recover and end with a significant profit.</li><li><strong>Performance Metrics</strong>: The Sharpe Ratio of 0.67 and Sortino Ratio of 2.72 indicate a decent risk-adjusted return, with the Sortino Ratio showing better performance when focusing on downside risk.</li><li><strong>Trading Activity</strong>: The strategy executed a large number of trades (1910) with a win rate of 63.66%. The average trade resulted in a small but positive return (0.19%).</li><li><strong>Risk Management</strong>: The use of stop-loss and take-profit mechanisms helped manage risk and contributed to a positive overall expectancy (0.23%).</li></ol><p>In summary, the strategy demonstrated strong performance with a high return, but it also encountered significant volatility and drawdowns. Continuous optimization and risk management are essential to maintain and improve its performance.</p><h3>Conclusion for LSTM Time Series Classification Model</h3><p>The LSTM time series classification model demonstrated effective performance in classifying Ethereum price movements into neutral (0), long (1), and short (2) categories. 
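The three-class scheme (0 = neutral, 1 = long, 2 = short) can be illustrated with a toy labeling rule. This is only a sketch with an assumed ±0.5% threshold; the article's actual labels come from the SL/TP settings used in the training notebook:

```python
def label_move(forward_return_pct, threshold_pct=0.5):
    # Map a forward-looking return to the article's class scheme:
    # 1 = long if the move exceeds the threshold,
    # 2 = short if it falls below the negative threshold,
    # 0 = neutral otherwise.
    # threshold_pct is an assumed placeholder, not the notebook's value.
    if forward_return_pct > threshold_pct:
        return 1
    if forward_return_pct < -threshold_pct:
        return 2
    return 0
```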
Below are the key takeaways and conclusions from the implementation and backtesting of the model:</p><h4>Key Takeaways</h4><ol><li><strong>Model Architecture</strong>:</li></ol><ul><li>The LSTM model was constructed with four layers of LSTM units, each with dropout and recurrent dropout for regularization.</li><li>Dense layers with ReLU activation and dropout were added to capture complex patterns in the data.</li><li>The model was compiled using the Adam optimizer and trained with categorical cross-entropy loss, precision, accuracy, and recall as metrics.</li></ul><p><strong>2. Class Balancing</strong>:</p><ul><li>Class weights were applied to address the imbalance in the dataset, ensuring that each class was given appropriate importance during training.</li></ul><p><strong>3. Prediction and Signal Generation</strong>:</p><ul><li>The model’s predictions on the test dataset were shifted to align the signals with the corresponding trading periods.</li><li>Trading signals were generated based on the model’s predictions and used to drive a trading strategy.</li></ul><p><strong>4. Trading Strategy</strong>:</p><ul><li>The trading strategy used the LSTM model’s predictions to execute trades with defined stop-loss and take-profit levels.</li><li>The strategy was backtested on a substantial dataset, and various performance metrics were recorded.</li></ul><h4>Performance Summary</h4><ul><li><strong>Total Return</strong>: The strategy achieved a total return of 720.38%, significantly outperforming the buy-and-hold return of 47.19%.</li><li><strong>Annualized Return</strong>: The annualized return was 110.58%, indicating strong performance over the test period.</li><li><strong>Volatility</strong>: The strategy experienced high annualized volatility at 166.00%, reflecting the inherent risks of cryptocurrency trading.</li><li><strong>Drawdowns</strong>: The maximum drawdown was -46.70%, with an average drawdown of -4.32%. 
Managing these drawdowns is crucial for long-term success.</li><li><strong>Sharpe Ratio</strong>: The Sharpe Ratio of 0.67 indicates a reasonable risk-adjusted return.</li><li><strong>Sortino Ratio</strong>: The Sortino Ratio of 2.72 shows better performance when focusing on downside risk.</li><li><strong>Trade Metrics</strong>: With 1910 trades executed, the strategy had a win rate of 63.66% and an average trade duration of 0 days 13:33:00. The profit factor was 1.18, indicating a positive overall outcome.</li></ul><h4>Conclusion</h4><p>The LSTM time series classification model has proven to be a valuable tool for predicting Ethereum price movements and generating profitable trading signals. Despite the high volatility and substantial drawdowns, the model’s robust returns and positive expectancy demonstrate its potential in algorithmic trading.</p><p>However, there are areas for further improvement and optimization:</p><ol><li><strong>Risk Management</strong>: Implementing advanced risk management techniques could help mitigate drawdowns and volatility.</li><li><strong>Model Optimization</strong>: Continuous refinement of the LSTM model’s architecture and hyperparameters can enhance performance.</li><li><strong>Broader Application</strong>: Extending the model to other assets and timeframes could provide additional insights and opportunities.</li><li><strong>Live Trading</strong>: Testing the strategy in a live trading environment would provide practical insights and validate its real-world applicability.</li></ol><p>Overall, the LSTM model’s ability to capture complex patterns in time series data makes it a powerful tool for trading strategies, with significant potential for generating high returns.</p><h3>Applying the Neural Network LSTM Model to Other Assets and Shortlisting the Best:</h3><p>From here on, we explain how to reuse the same trained model to shortlist the best assets, by running the same backtest on every asset using data downloaded from 
TradingView for backtesting.</p><h4>Importing Necessary Packages and Setting Up the Model &amp; Exchange API with CCXT</h4><pre>import time<br>import logging<br>import io<br>import contextlib<br>import glob<br>import ccxt<br>from datetime import datetime, timedelta, timezone<br>import keras<br>from keras.models import save_model, load_model<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>import warnings<br>from threading import Thread, Event<br>import decimal<br>import joblib<br>from tcn import TCN<br><br># from pandas.core.computation import PerformanceWarning<br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># NOTE: Train your own model from the other notebook I have shared and use the most successful trained model here.<br># model_file_path = &#39;./model_lstm_1tp_1sl_2p5SlTp_April_5th_ShRa_1_49_15m.hdf5&#39;<br>model_file_path = &#39;./model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;<br>model_name = model_file_path.split(&#39;/&#39;)[-1]<br>##################################### TO Load A Model #######################################<br># NOTE: for an LSTM-based neural network model, you can directly call load_model with model_file_path as given below<br># Load your pre-trained model; a Keras-trained model must be loaded with load_model from keras.models, not with joblib<br>model = load_model(model_file_path)<br># # or<br># model = tf.keras.models.load_model(model_file_path)<br># NOTE: for a TCN-based neural network model, you need to add custom_objects while loading the model, as given below<br># # Define a dictionary to specify custom objects<br># custom_objects = {&#39;TCN&#39;: TCN}<br># model = load_model(model_file_path, custom_objects = custom_objects)<br><br>##########################################################################################<br>########################## Adding the exchange information 
##############################<br>exchange = ccxt.binanceusdm(<br>    {<br>        &#39;enableRateLimit&#39;: True,  # required by the Manual<br>        # Add any other authentication parameters if needed<br>        &#39;rateLimit&#39;: 250, &#39;verbose&#39;: True<br>    }<br>    )<br># NOTE: I used https://testnet.binancefuture.com/en/futures/BTCUSDT for the testnet API (it has very bad liquidity issues for various assets, among other problems, but can be used for purely testing purposes)<br>#  kraken testnet creds pubkey - K9dS2SK8JURMl9F300lguUhOS/ao3HM+tfRMgJGed+JhDfpJhvsC/y           privatekey - /J/03PPyPwsrPsKZYtLqOQNPLKZJattT6i15Bpg14/6ALokHHY/MBb1p6tYKyFgkKXIJIOMbBsFRfL3aBZUvQ1<br># api_key = &#39;8f7080f8821b58a53f5c49f00cbff7fdcce1cca9c9154ea&#39;<br># secret_key = &#39;1e58391a46a7dbb098aa5121d3e69e3a6660ba8c38f&#39;<br><br># exchange.apiKey = api_key<br># exchange.secret = secret_key<br># exchange.set_sandbox_mode(True)<br><br># NOTE: if you want to go live, uncomment the 5 lines below, comment out the 5 lines above, and change to your own api_key and secret_key (the one below is a dummy; also make sure to grant &quot;futures&quot; permission while creating your API key in the exchange)<br>api_key = &#39;CxUdC80c3Y5Nf1iRJMZJelOCfFJWISbQsasPraCb4Zdskx7MM8uCl&#39;<br>secret_key = &#39;p4XwsZwmmNswzDHzE5TSUOgXT5tASArfSO0pxfYrBMtezlCpDGtz&#39;<br>exchange.apiKey = api_key<br>exchange.secret = secret_key<br>exchange.set_sandbox_mode(False)<br>#######################################################################################<br>    # exchange.set_sandbox_mode(True)<br>exchange.has<br># exchange.fetchBalance()[&quot;info&quot;][&quot;assets&quot;]<br>exchange.options = {&#39;defaultType&#39;: &#39;future&#39;, # or &#39;margin&#39; or &#39;spot&#39;<br>                    &#39;timeDifference&#39;: 0,  # Set an appropriate initial value for time difference<br>                        &#39;adjustForTimeDifference&#39;: True,<br>                        
&#39;newOrderRespType&#39;: &#39;FULL&#39;,<br>                        &#39;defaultTimeInForce&#39;: &#39;GTC&#39;}</pre><p>The provided code snippet demonstrates how to load our trained model and connect to a cryptocurrency exchange (Binance) for potential shortlisting of assets based on backtesting. Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Standard libraries for time, logging, data manipulation (pandas, numpy), machine learning (Keras, scikit-learn), technical indicators (talib), threading, and others.</li></ul><p><strong>Model Loading:</strong></p><ul><li>Comments explain the difference in loading a model based on its type:</li><li><strong>LSTM Model:</strong> Uses load_model from keras.models directly (as shown in your code).</li><li><strong>TCN Model:</strong> Requires specifying custom objects (custom_objects={&#39;TCN&#39;: TCN}) during loading.</li></ul><p><strong>Exchange Connection:</strong></p><ul><li>Creates a ccxt.binanceusdm object (exchange) to interact with the Binance exchange.</li><li>Sets API credentials and enables rate limiting for responsible API usage.</li><li>Comments mention testnet and live API usage options.</li></ul><p><strong>Important Notes:</strong></p><ul><li><strong>Replace API Keys:</strong> Replace the dummy api_key and secret_key with your actual Binance API credentials (if going live). Ensure your API has &quot;futures&quot; permission.</li><li><strong>Backtesting Not Shown:</strong> This code focuses on model loading and exchange connection. 
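As a sketch of the data-download step, the symbol, timeframe, and column handling below are illustrative assumptions; ccxt's fetch_ohlcv returns rows of [timestamp, open, high, low, close, volume]:

```python
import pandas as pd

def fetch_ohlcv_df(exchange, symbol="ETH/USDT", timeframe="15m", limit=500):
    # `exchange` is any ccxt exchange object (e.g. the binanceusdm instance above).
    raw = exchange.fetch_ohlcv(symbol, timeframe=timeframe, limit=limit)
    df = pd.DataFrame(raw, columns=["Date", "Open", "High", "Low", "Close", "Volume"])
    df["Date"] = pd.to_datetime(df["Date"], unit="ms")  # ccxt timestamps are in ms
    return df.set_index("Date")
```

The resulting DataFrame uses the same Date-indexed OHLCV layout that the earlier backtests expect.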
The actual backtesting loop and asset shortlisting logic are not included.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Loop:</strong> You’ll need to implement a loop to iterate through your desired assets:</li></ol><ul><li>Download historical data from the exchange (using exchange.fetch_ohlcv) for each asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model (model.predict).</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Store backtesting results for each asset.</li></ul><ol><li><strong>Shortlisting:</strong> Analyze the stored backtesting results and apply filters/sorting based on your chosen metrics to shortlist the best-performing assets.</li><li><strong>Risk Management:</strong> Remember, backtesting is for evaluation, not a guarantee of future success. Implement proper risk management strategies before using these shortlisted assets in real trading.</li></ol><pre>from sklearn.preprocessing import MinMaxScaler<br>from backtesting import Strategy, Backtest<br>import os<br>import json<br>import pandas as pd<br>import talib as ta<br>import numpy as np<br>from concurrent.futures import ThreadPoolExecutor<br>import threading<br><br>import time<br>import ccxt<br>from keras.models import save_model, load_model<br>import warnings<br>import decimal<br>import joblib<br>import nest_asyncio<br># from pandas.core.computation import PerformanceWarning<br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># Load your pre-trained model<br># model = load_model(&#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39;)<br># Define the custom_assets dictionary outside the loop<br>custom_assets = {}<br># Function to load custom_assets from a text file<br>def load_custom_assets():<br>    if os.path.exists(&#39;custom_assets.txt&#39;):<br>        try:<br>            with 
open(&#39;custom_assets.txt&#39;, &#39;r&#39;) as txt_file:<br>                return json.loads(txt_file.read())<br>        except json.JSONDecodeError as e:<br>            print(f&quot;Error decoding JSON in custom_assets.txt: {e}&quot;)<br>            return {}<br>    else:<br>        print(&quot;custom_assets.txt file not found. Initializing an empty dictionary.&quot;)<br>        custom_assets = {}<br>        save_custom_assets(custom_assets)<br>        return custom_assets<br># Define a threading lock<br>file_lock = threading.Lock()<br># Function to save custom_assets to a text file<br>def save_custom_assets(custom_assets):<br>    with file_lock:<br>        with open(&#39;custom_assets.txt&#39;, &#39;w&#39;) as txt_file:<br>            json.dump(custom_assets, txt_file, indent=4)</pre><p>The provided code focuses on managing custom assets and preparing for multi-threaded backtesting. Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for data manipulation (pandas, numpy), technical indicators (talib), backtesting framework (backtesting), threading, and others.</li></ul><p><strong>Custom Assets Management:</strong></p><p>custom_assets dictionary:</p><ul><li>Stores custom assets for backtesting (likely symbols or names).</li></ul><p>load_custom_assets function:</p><ul><li>Checks for a file named custom_assets.txt.</li><li>If the file exists, attempts to load the dictionary from the JSON content. Handles potential JSON decoding errors.</li><li>If the file doesn’t exist, initializes an empty dictionary, saves it, and returns it.</li></ul><p>save_custom_assets function:</p><ul><li>Uses a threading lock (file_lock) to ensure safe access to the file during potential concurrent writes.</li><li>Saves the custom_assets dictionary as JSON to the custom_assets.txt file.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Function:</strong> You’ll likely define a function for the backtesting logic. 
This function would:</li></ol><ul><li>Take an asset symbol as input.</li><li>Download historical data for the asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model.</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Calculate and store backtesting results (Sharpe Ratio, drawdown, etc.) for the asset.</li></ul><p><strong>2. Multithreaded Backtesting:</strong></p><ul><li>You can utilize the ThreadPoolExecutor and threading capabilities to download and backtest multiple assets simultaneously. This can significantly improve efficiency compared to a sequential approach.</li><li>The custom_assets dictionary and its management functions will be crucial for providing asset symbols to the backtesting function within the thread pool.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>Remember to replace &#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39; with the actual path to your trained model file.</li><li>Consider error handling and logging mechanisms for potential issues during data download, backtesting calculations, or thread management.</li></ul><pre># NOTE: Fetching Binance Futures perpetual USDT assets; if a 4xx error occurs, it means there is some restriction from your government, or your VPN server is connected to a region where Binance is restricted. 
You can use assets from the collection given by me in next cell<br><br>import requests<br>def get_binance_futures_assets():<br>    url = &quot;https://fapi.binance.com/fapi/v1/exchangeInfo&quot;<br>    try:<br>        response = requests.get(url)<br>        response.raise_for_status()  # Raise an exception for 4xx and 5xx status codes<br>        data = response.json()<br>        assets = [asset[&#39;symbol&#39;] for asset in data[&#39;symbols&#39;] if asset[&#39;contractType&#39;] == &#39;PERPETUAL&#39; and asset[&#39;quoteAsset&#39;] == &#39;USDT&#39;]<br>        return assets<br>    except requests.exceptions.RequestException as e:<br>        print(&quot;Failed to fetch Binance futures assets:&quot;, e)<br>        return []<br># Get all Binance futures USDT perpetual assets<br>futures_assets = get_binance_futures_assets()<br>print(&quot;Binance Futures USDT Perpetual Assets:&quot;)<br>print(futures_assets, len(futures_assets))</pre><pre>output:<br>&#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, &#39;ADAUSDT.P&#39;, &#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, 
&#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, &#39;ZENUSDT.P&#39;, &#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, &#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        &#39;DENTUSDT.P&#39;, &#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, 
&#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>        &#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, &#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, &#39;XVSUSDT.P&#39;, &#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, 
&#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, &#39;JTOUSDT.P&#39;, &#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, &#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, &#39;MAVIAUSDT.P&#39;, &#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;</pre><p>This code snippet retrieves a list of perpetual USDT contracts available on Binance Futures using the official Binance API. 
Here’s a breakdown:</p><p><strong>Function:</strong></p><p>The get_binance_futures_assets function:</p><ul><li>Defines the API endpoint URL for retrieving exchange information.</li><li>Uses a try-except block to handle potential errors during the request.</li></ul><p>Within the try block:</p><ul><li>Makes a GET request to the Binance API endpoint.</li><li>Raises an exception for status codes in the 4xx (client errors) or 5xx (server errors) range to indicate failures.</li><li>Parses the JSON response from the successful request.</li></ul><p>Extracts symbols from the response data:</p><ul><li>Iterates through the &#39;symbols&#39; list in the JSON data.</li></ul><p>Filters for assets that meet both criteria:</p><ul><li>&#39;contractType&#39; is &#39;PERPETUAL&#39; (indicates perpetual contracts).</li><li>&#39;quoteAsset&#39; is &#39;USDT&#39; (indicates USDT-quoted contracts).</li></ul><p>Return value and error handling:</p><ul><li>Builds a list of the asset symbols that meet both criteria and returns it.</li><li>The except block catches potential request exceptions, prints an error message, and returns an empty list on failure.</li></ul><p><strong>Printing Results:</strong></p><ul><li>Calls the get_binance_futures_assets function to retrieve the asset list.</li><li>Prints a message indicating the retrieved assets and their count.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>This approach leverages the official Binance API, which might be subject to rate limits or changes in the future. Consider implementing appropriate error handling and retry mechanisms.</li><li>The code assumes a successful API call. 
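The retry mechanism suggested in the note above can be sketched as follows. This is a minimal illustration rather than the article's original code: the endpoint URL is Binance's standard public Futures exchangeInfo endpoint, the backoff values are arbitrary, and filter_perpetual_usdt is a helper introduced here to keep the pure filtering step separate from the network call.

```python
import time

def filter_perpetual_usdt(symbols):
    """Keep only USDT-quoted perpetual contracts from an exchangeInfo symbol list."""
    return [s["symbol"] for s in symbols
            if s.get("contractType") == "PERPETUAL" and s.get("quoteAsset") == "USDT"]

def get_binance_futures_assets(max_retries=3, backoff=2.0):
    """Fetch perpetual USDT symbols, retrying on HTTP 429 and request errors."""
    import requests  # deferred import so the pure filter above has no third-party deps
    url = "https://fapi.binance.com/fapi/v1/exchangeInfo"
    for attempt in range(1, max_retries + 1):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code == 429:
                # Honour Retry-After when the server sends it, else back off per attempt
                time.sleep(float(resp.headers.get("Retry-After", backoff * attempt)))
                continue
            resp.raise_for_status()
            return filter_perpetual_usdt(resp.json()["symbols"])
        except requests.RequestException as e:
            print(f"Attempt {attempt} failed: {e}")
            time.sleep(backoff * attempt)
    return []
```

Splitting the filtering step out of the network call also makes it easy to unit-test without hitting the API.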
You might want to add checks for specific error codes (e.g., 429 for “Too Many Requests”) and handle them gracefully (e.g., retrying after a delay).</li></ul><pre># !pip install --upgrade --no-cache-dir git+https://github.com/rongardF/tvdatafeed.git<br><br>import os<br>import json<br>import asyncio<br>from datetime import datetime, timedelta<br>import pandas as pd<br>from tvDatafeed import TvDatafeed, Interval<br># Initialize TvDatafeed object<br># username = &#39;YourTradingViewUsername&#39;<br># password = &#39;YourTradingViewPassword&#39;<br># tv = TvDatafeed(username, password)<br>tv = TvDatafeed()<br>timeframe = &#39;15m&#39;<br>interval = None<br>if timeframe == &#39;1m&#39;:<br>    interval = Interval.in_1_minute<br>elif timeframe == &#39;3m&#39;:<br>    interval = Interval.in_3_minute<br>elif timeframe == &#39;5m&#39;:<br>    interval = Interval.in_5_minute<br>elif timeframe == &#39;15m&#39;:<br>    interval = Interval.in_15_minute<br>elif timeframe == &#39;30m&#39;:<br>    interval = Interval.in_30_minute<br>elif timeframe == &#39;45m&#39;:<br>    interval = Interval.in_45_minute<br>elif timeframe == &#39;1h&#39;:<br>    interval = Interval.in_1_hour<br>elif timeframe == &#39;2h&#39;:<br>    interval = Interval.in_2_hour<br>elif timeframe == &#39;4h&#39;:<br>    interval = Interval.in_4_hour<br>elif timeframe == &#39;1d&#39;:<br>    interval = Interval.in_daily<br>elif timeframe == &#39;1w&#39;:<br>    interval = Interval.in_weekly<br>elif timeframe == &#39;1M&#39;:<br>    interval = Interval.in_monthly<br># NOTE: List of symbols around 126 mentioned here. 
You can change to your own set of lists if you know the tradingview code for the symbol you want to download.<br>data = [<br>    &#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, &#39;ADAUSDT.P&#39;, &#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, &#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, &#39;ZENUSDT.P&#39;, &#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, 
&#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        &#39;DENTUSDT.P&#39;, &#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, &#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>        &#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, 
&#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, &#39;XVSUSDT.P&#39;, &#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, &#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, &#39;JTOUSDT.P&#39;, &#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, 
&#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, &#39;MAVIAUSDT.P&#39;, &#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;<br>]<br>import nest_asyncio  # needed for asyncio.run() inside notebooks; missing from the imports above<br>nest_asyncio.apply()<br># Define data download function<br>async def download_data(symbol):<br>    try:<br>        data = tv.get_hist(symbol=symbol, exchange=&#39;BINANCE&#39;, interval=interval, n_bars=20000, extended_session=True)<br>        if not data.empty:<br>            # Convert Date objects to strings<br>            # data[&#39;Date&#39;] = data.index.date.astype(str)<br>            # data[&#39;Time&#39;] = data.index.time.astype(str)<br>            data[&#39;date&#39;] = data.index.astype(str)  # Add a new column for timestamps<br>            folder_name = f&quot;tradingview_crypto_assets_{timeframe}&quot;<br>            os.makedirs(folder_name, exist_ok=True)<br>            # Strip the &quot;USDT.P&quot; suffix to build the file name (e.g. BTC.json)<br>            symbol_file_name = symbol.replace(&quot;USDT.P&quot;, &quot;&quot;) + &quot;.json&quot;<br>            file_name = os.path.join(folder_name, symbol_file_name)<br>            # Convert DataFrame to dictionary<br>            data_dict = data.to_dict(orient=&#39;records&#39;)<br>            with open(file_name, &quot;w&quot;) as file:<br>                # Serialize dictionary to JSON<br>                json.dump(data_dict, file)<br>            
print(f&quot;Data for {symbol} downloaded and saved successfully.&quot;)<br>        else:<br>            print(f&quot;No data available for {symbol}.&quot;)<br>    except Exception as e:<br>        print(f&quot;Error occurred while downloading data for {symbol}: {e}&quot;)<br># Define main function to run async download tasks<br>async def main():<br>    tasks = [download_data(symbol) for symbol in data]<br>    await asyncio.gather(*tasks)<br># Run the main function<br>asyncio.run(main())</pre><p>This code snippet demonstrates how to download historical cryptocurrency data from TradingView for multiple assets using the tvDatafeed library. Here&#39;s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for asynchronous programming (asyncio), working with dates (datetime), data manipulation (pandas), and file handling (os, json).</li><li>Imports the TvDatafeed class from tvDatafeed for interacting with TradingView.</li></ul><p><strong>TvDatafeed Object:</strong></p><ul><li>Initializes a TvDatafeed object (tv) without username and password (assuming a free account). 
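As an aside on the long if/elif ladder in the code above that maps the timeframe string to an Interval member: a dictionary lookup is a more compact equivalent. The member names below follow tvDatafeed's Interval enum, but resolve_interval itself is a hypothetical helper introduced here, not part of the library.

```python
# Maps timeframe strings to tvDatafeed Interval member names.
TIMEFRAME_TO_INTERVAL_NAME = {
    '1m': 'in_1_minute', '3m': 'in_3_minute', '5m': 'in_5_minute',
    '15m': 'in_15_minute', '30m': 'in_30_minute', '45m': 'in_45_minute',
    '1h': 'in_1_hour', '2h': 'in_2_hour', '4h': 'in_4_hour',
    '1d': 'in_daily', '1w': 'in_weekly', '1M': 'in_monthly',
}

def resolve_interval(timeframe, interval_enum):
    """Return the Interval member for a timeframe string (KeyError if unsupported)."""
    return getattr(interval_enum, TIMEFRAME_TO_INTERVAL_NAME[timeframe])

# usage (with tvDatafeed installed):
# from tvDatafeed import Interval
# interval = resolve_interval('15m', Interval)
```

A dict also makes the set of supported timeframes explicit, and an unsupported string fails loudly instead of silently leaving interval as None.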
Paid accounts might require credentials.</li></ul><p><strong>Timeframe and Interval:</strong></p><ul><li>Sets the desired timeframe (timeframe) for data download (e.g., &quot;15m&quot; for 15-minute intervals).</li><li>Maps the timeframe to the corresponding Interval enumeration value using a series of if statements.</li></ul><p><strong>Symbols List:</strong></p><ul><li>Defines a long list of symbols (data) representing cryptocurrencies on Binance Futures with perpetual USDT contracts (identified by the &quot;.P&quot; suffix).</li></ul><p><strong>Asynchronous Programming Setup:</strong></p><ul><li>Calls nest_asyncio.apply() to allow asynchronous code to run inside an already-running event loop (e.g., in a Jupyter notebook).</li></ul><p><strong>Download Function:</strong></p><ul><li>Defines an asynchronous function download_data(symbol) that takes a symbol as input.</li></ul><p>Attempts to download historical data for the symbol using tv.get_hist:</p><ul><li>Specifies the symbol, exchange (“BINANCE”), interval, number of bars (20000), and extended session (to potentially capture pre-market/after-market data).</li><li>Checks that the downloaded DataFrame (data) is not empty.</li></ul><p>If data is available:</p><ul><li>Converts the index (timestamps) to strings in a new column named “date”.</li><li>Creates a folder named tradingview_crypto_assets_{timeframe} to store the downloaded data (creates it if it doesn&#39;t exist).</li><li>Constructs the filename by stripping the “USDT.P” suffix from the symbol and appending “.json”.</li><li>Converts the DataFrame to a dictionary using to_dict(orient=&#39;records&#39;).</li><li>Saves the dictionary as JSON to the constructed filename.</li><li>Prints a success message.</li></ul><p>If no data is available:</p><ul><li>Prints a message indicating no data for the symbol.</li></ul><p>Error handling:</p><ul><li>Catches any exceptions (Exception) during download and prints an error message with the exception details.</li></ul><p><strong>Main Function:</strong></p><ul><li>Defines an
asynchronous function main that:</li><li>Creates a list of asynchronous tasks (tasks) using list comprehension. Each task calls download_data for a symbol from the data list.</li><li>Uses asyncio.gather(*tasks) to run all download tasks concurrently.</li></ul><p><strong>Running the Download:</strong></p><ul><li>Uses asyncio.run(main()) to execute the asynchronous tasks within the main function.</li></ul><p><strong>Important Notes:</strong></p><ul><li>This code retrieves data for a large number of symbols. Downloading a significant amount of data might exceed free account limitations or take a long time. Consider rate limits and adjust accordingly.</li><li>The code assumes a specific symbol format with the “.P” suffix. You might need to modify it for different symbol formats.</li><li>Error handling can be improved by implementing specific checks for different exception types (e.g., network errors, API errors).</li></ul><h4>Hyperoptimization of Multiple Assets for Specific ML/DL Model:</h4><pre>from pandas import Timestamp<br><br># Define a function to process each JSON file<br>def process_json(file_path):<br>    # try:<br>    with open(file_path, &quot;r&quot;) as f:<br>        data = json.load(f)<br>    df = pd.DataFrame(data)<br>    df.rename(columns={&#39;date&#39;: &quot;Date&quot;, &#39;open&#39;: &quot;Open&quot;, &#39;high&#39;: &quot;High&quot;, &#39;low&#39;: &quot;Low&quot;, &#39;close&#39;: &quot;Adj Close&quot;, &#39;volume&#39;: &quot;Volume&quot;}, inplace=True)<br>    df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;])<br>    df.set_index(&quot;Date&quot;, inplace=True)<br>    df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br>    symbol_name = df[&#39;symbol&#39;].iloc[0]  # Assuming all rows have the same symbol<br>    symbol_name = symbol_name.replace(&quot;BINANCE:&quot;, &quot;&quot;)<br>    symbol_name = symbol_name.replace(&quot;USDT.P&quot;, &quot;/USDT:USDT&quot;)<br>    df.drop(columns=[&#39;symbol&#39;], inplace=True)<br>    
target_prediction_number = 2<br>    time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>    name_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>    new_columns = []<br>    for period in time_periods:<br>        for nperiod in name_periods:<br>            df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>            df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period)<br>            df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=period)<br>            df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>            df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>            df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period)<br>            df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period)<br>            df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>            df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>            macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), signalperiod=(period + 9))<br>            df[f&#39;MACD_{period}&#39;] = macd<br>            df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>            bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=period, nbdevup=2, nbdevdn=2)<br>            df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>            df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>            df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>            df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], timeperiod=(period+35))<br>    
df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>    df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>    df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br>    # Volume-Based Indicators<br>    df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>    df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br>    # Momentum-Based Indicators<br>    df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br>    df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br>    # More feature engineering...<br>    timeframe_diff = df.index[-1] - df.index[-2]<br>    timeframe = None<br>    # Define timeframe based on time difference<br>    if timeframe_diff == pd.Timedelta(minutes=1):<br>        timeframe = &#39;1m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=3):<br>        timeframe = &#39;3m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=5):<br>        timeframe = &#39;5m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=15):<br>        timeframe = &#39;15m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=30):<br>        timeframe = &#39;30m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=45):<br>        timeframe = &#39;45m&#39;<br>    elif timeframe_diff == pd.Timedelta(hours=1):<br>        timeframe = &#39;1h&#39;<br>    elif timeframe_diff == pd.Timedelta(days=1):<br>        timeframe = &#39;1d&#39;<br>    elif timeframe_diff == pd.Timedelta(weeks=1):<br>        timeframe = &#39;1w&#39;<br>    else:<br>        timeframe = &#39;Not sure&#39;<br>        <br>    # print(&#39;timeframe is - &#39;, timeframe)<br>    # Remove rows containing inf or nan values<br>    df.dropna(inplace=True)<br>    # Scaling<br>    scaler = 
MinMaxScaler(feature_range=(0,1))<br>    X = df.copy()<br>    X_scale = scaler.fit_transform(X)<br><br>    # Define a function to reshape the data<br>    def reshape_data(data, time_steps):<br>        samples = len(data) - time_steps + 1<br>        reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>        for i in range(samples):<br>            reshaped_data[i] = data[i:i + time_steps]<br>        return reshaped_data<br>    # Reshape the scaled X data<br>    time_steps = 1  # Adjust the number of time steps as needed<br>    X_reshaped = reshape_data(X_scale, time_steps)<br>    # Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, features)<br>    # Each sample contains scaled data for a specific time window<br>    X = X_reshaped<br>    # Use the loaded model to predict on the entire dataset<br>    df_ens = df.copy() <br>    # df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=1)<br>    df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=2)<br>    df_ens[&#39;vcews&#39;] = df_ens[&#39;voting_classifier_ensembel_with_scale&#39;].shift(0).dropna().astype(int)<br>    df_ens = df_ens.dropna()<br>    # Backtesting<br>    df_ens = df_ens.reset_index(inplace=False)<br>    df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>    df_ens.set_index(&#39;Date&#39;, inplace=True)<br>    best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>        &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        
&#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br>    # Define SIGNAL_3 function<br>    def SIGNAL_3(df_ens):<br>        return df_ens[&#39;vcews&#39;]<br>    # Define MyCandlesStrat_3 class<br>    class MyCandlesStrat_3(Strategy):  <br>        sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;] <br>        tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;] <br>        limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;] <br>        sl_pct_s = best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;] <br>        tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;] <br>        limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;] <br>        margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>        TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>        TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>        roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>        roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>        roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>        roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>        roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>        roi_at_500 = best_params[&#39;roi_at_500&#39;]<br>        def init(self):<br>            super().init()<br>            self.signal1 = self.I(SIGNAL_3, self.data)<br>            self.entry_time = Timestamp.now()<br>            self.max_profit = 0<br>        def next(self):<br>            super().next() <br>            if (self.signal1 == 1):<br>                <br>                sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 
0.001))<br>                limit_price_l = tp_price * 0.994<br>                self.position.is_long<br>                self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br>                <br>                if self.position.is_long:<br>                    self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                <br>                # Calculate current profit<br>                # current_profit = self.trades[0].pl_pct<br>                # Check for trailing stop loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; 
(self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br>            elif (self.signal1 == 2):<br>                <br>                sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                limit_price_s = tp_price * 1.004<br>                self.position.is_short<br>                self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br>                <br>                if self.position.is_short:<br>                    self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                <br>                # Calculate current profit<br>                # current_profit = self.trades[0].pl_pct<br>                # Check for trailing stop 
loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = max((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                # Check for time interval-based selling<br>                if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                    self.position.close()<br>                elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and 
((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                    self.position.close()<br>                elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                    self.position.close()<br><br>    # Run backtest<br>    bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, margin= (1/MyCandlesStrat_3.margin_leverage), exclusive_orders=False)<br>    stat_3 = bt_3.run()<br>    print(&quot;backtest one done at 226 line - &quot;, stat_3)<br>    # custom_assets = {}<br>    if ((stat_3[&#39;Return [%]&#39;] &gt; (stat_3[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>        &amp; (stat_3[&#39;Profit Factor&#39;] &gt; 1.0) <br>        &amp; (stat_3[&#39;Max. 
Drawdown [%]&#39;] &gt; -40)<br>        &amp; (stat_3[&#39;Win Rate [%]&#39;] &gt; 55)<br>        &amp; (stat_3[&#39;Return [%]&#39;] &gt; 0)):<br>        file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>        <br>        best_params = {&#39;Optimizer&#39;: &#39;1st backtest - Expectancy&#39;,<br>                       &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_3[&#39;Start&#39;]} upto {stat_3[&#39;End&#39;]} for a duration of {stat_3[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_3[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_3[&#39;Return [%]&#39;],3)},Expectancy % - {round(stat_3[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_3[&#39;Sharpe Ratio&#39;],4)}.&quot;,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br>        key_mapping = {<br>            &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>            &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>            &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: &#39;take_profit_percent_long&#39;,<br>            
&#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>            &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>            &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>            &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>            &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>            &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>            &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>            &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>        }<br>        custom_assets = load_custom_assets()<br>        transformed_params = {}<br>        for old_key, value in best_params.items():<br>            new_key = key_mapping.get(old_key, old_key)<br>            transformed_params[new_key] = value<br>        new_key = file_prefix + &quot;/USDT:USDT&quot;<br>        # custom_assets[new_key] = transformed_params<br>        # Update or add new entry to custom_assets<br>        if new_key in custom_assets:<br>            # Update existing entry<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key].update(transformed_params)<br>        else:<br>            # Add new entry<br>            # Multiply numerical values by 0.001 for new entry if value &gt; 1<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, float)) and 
key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key] = transformed_params<br>        <br>        # Save custom_assets to JSON file<br>        save_custom_assets(custom_assets)<br>        print(custom_assets)<br>    else:<br>        # Optimization<br>        def optimize_strategy():<br>            # Optimization Params<br>            optimizer = &#39;Win Rate [%]&#39;<br>            stats = bt_3.optimize(<br>                sl_pct_l = range(6,100, 2), # (5,10,15,20,25,30,40,50,75,100)<br>                tp_pct_l =  range(40,100, 2), # (0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.075, 0.1)<br>                # limit_l =  (4,9,14,19,24,29,39,49,74,90),#  (0.004, 0.009, 0.014, 0.019, 0.024, 0.029, 0.039, 0.049, 0.074, 0.09)<br>                sl_pct_s = range(6,100, 2),<br>                tp_pct_s =  range(40,100, 2),<br>                # limit_s =  (4,9,14,19,24,29,39,49,74,90),<br>                margin_leverage = range(1, 8),<br>                TRAILING_ACTIVATE_PCT = range(6,100,2),<br>                TRAILING_STOP_PCT = range(6,100,2),<br>                roi_at_50 = range(6,100,2),<br>                roi_at_100 = range(6,100,2),<br>                roi_at_150 = range(6,100,2),<br>                roi_at_200 = range(6,100,2),<br>                roi_at_300 = range(6,100,2),<br>                roi_at_500 = range(6,100,2),<br>                constraint=lambda p: ( (p.sl_pct_l &gt; (p.tp_pct_l) ) and <br>                                      ((p.sl_pct_s) &gt; (p.tp_pct_s)) and <br>                                      (p.roi_at_50 &gt; p.roi_at_100) and (p.roi_at_100 &gt; p.roi_at_150) and <br>                                      (p.roi_at_150 &gt; p.roi_at_200) and (p.roi_at_200 &gt; p.roi_at_300) and (p.roi_at_300 &gt; p.roi_at_500) and<br>                                     (p.TRAILING_ACTIVATE_PCT &gt; p.TRAILING_STOP_PCT)),<br>  
              maximize = optimizer,<br>                return_optimization=True,<br>                method = &#39;skopt&#39;,<br>                max_tries = 120 # for grid search, 0.2 means 20% and 1.0 means 100% of all combinations; that applies when not using the &#39;skopt&#39; method. For &#39;skopt&#39;, an integer from 1 up to 200 sets the number of optimization epochs<br>            )<br>            # Extract the optimization results<br>            best_params = {<br>                &#39;Optimizer&#39;: optimizer,<br>                &#39;model_trained_on&#39;: model_name,<br>                &#39;OptimizerResult_Cross&#39;: stats[0][optimizer],<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats[1].x[0],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats[1].x[1],<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats[1].x[1] * 0.997,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats[1].x[2],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats[1].x[3],<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats[1].x[3] * 0.997,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: stats[1].x[4],<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: stats[1].x[5],<br>                &#39;TRAILING_STOP_PCT&#39; : stats[1].x[6],<br>                &#39;roi_at_50&#39; : stats[1].x[7],<br>                &#39;roi_at_100&#39; : stats[1].x[8],<br>                &#39;roi_at_150&#39; : stats[1].x[9],<br>                &#39;roi_at_200&#39; : stats[1].x[10],<br>                &#39;roi_at_300&#39; : stats[1].x[11],<br>                &#39;roi_at_500&#39; : stats[1].x[12]<br>                # &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats._strategy.sl_pct_l,<br>                # &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats._strategy.tp_pct_l,<br>                # &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats._strategy.tp_pct_l * 0.998,<br>                # &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats._strategy.sl_pct_s,<br>                # 
&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats._strategy.tp_pct_s,<br>                # &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats._strategy.sl_pct_s * 0.998,<br>                # &#39;BEST_LEVERAGE_margin_leverage&#39;: stats._strategy.margin_leverage<br>            }<br>            <br>            return best_params<br><br>        # Obtain best parameters<br>        best_params = optimize_strategy()<br>        print(&quot;best_params line 322 &quot;, best_params)<br>        if best_params:<br>            print(best_params)<br>        else:<br>            # Fallback defaults covering every parameter MyCandlesStrat_11 reads<br>            best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;: 0.024,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: 0.024,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>            &#39;TRAILING_STOP_PCT&#39; : 5,<br>            &#39;roi_at_50&#39; : 24,<br>            &#39;roi_at_100&#39; : 20,<br>            &#39;roi_at_150&#39; : 18,<br>            &#39;roi_at_200&#39; : 15,<br>            &#39;roi_at_300&#39; : 13,<br>            &#39;roi_at_500&#39; : 10}<br>        # Define SIGNAL_11 function<br>        def SIGNAL_11(df_ens):<br>            return df_ens[&#39;vcews&#39;]<br>        # Define MyCandlesStrat_11 class<br>        class MyCandlesStrat_11(Strategy):<br>            sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;]<br>            tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;]<br>            limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;]<br>            sl_pct_s = 
best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;]<br>            tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;]<br>            limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;]<br>            margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>            TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>            TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>            roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>            roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>            roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>            roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>            roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>            roi_at_500 = best_params[&#39;roi_at_500&#39;]<br>            def init(self):<br>                super().init()<br>                self.signal1 = self.I(SIGNAL_11, self.data)<br>                self.entry_time = Timestamp.now()<br>                self.max_profit = 0<br>            def next(self):<br>                super().next() <br>                if (self.signal1 == 1):<br>                    sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 0.001))<br>                    limit_price_l = tp_price * 0.994<br>                    self.position.is_long<br>                    self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br>                    if self.position.is_long:<br>                        self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                        
self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    # time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and 
(self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br>                elif (self.signal1 == 2):<br>                    sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                    limit_price_s = tp_price * 1.004<br>                    self.position.is_short<br>                    self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br>                    if self.position.is_short:<br>                        self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                        self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = max((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                        time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time 
interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and 
((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br><br>        # Run backtest with optimized parameters<br>        bt_11 = Backtest(df_ens, MyCandlesStrat_11, cash=100000, commission=.001, margin=(1 / MyCandlesStrat_11.margin_leverage), exclusive_orders=False)<br>        stat_11 = bt_11.run()<br>        print(&quot;stat_11 line 388 - &quot;, stat_11)<br>        # Additional processing for custom_assets<br>        # custom_assets = {}<br>        if ((stat_11[&#39;Return [%]&#39;] &gt; (stat_11[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>            &amp; (stat_11[&#39;Profit Factor&#39;] &gt; 1.0)<br>            &amp; (stat_11[&#39;Max. Drawdown [%]&#39;] &gt; -35)<br>            &amp; (stat_11[&#39;Win Rate [%]&#39;] &gt; 52)<br>            &amp; (stat_11[&#39;Return [%]&#39;] &gt; 0)):<br>            file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>            <br>            print(f&quot;second backtest success for {file_prefix}/USDT:USDT with Win Rate % of {stat_11[&#39;Win Rate [%]&#39;]} and with Return in % of {stat_11[&#39;Return [%]&#39;]}&quot; )<br>            <br>            <br>            best_params = {&#39;Optimizer&#39;: &#39;2nd backtest with Expectancy&#39;,<br>            # &#39;OptimizerResult_Cross&#39;: f&quot;2nd backtest, Sharpe Ratio - {stat_11[&#39;Sharpe Ratio&#39;]}, Returns % - {stat_11[&#39;Return [%]&#39;]}, Win Rate % - {stat_11[&#39;Win Rate [%]&#39;]}&quot;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_11[&#39;Start&#39;]} upto {stat_11[&#39;End&#39;]} for a duration of {stat_11[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_11[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_11[&#39;Return [%]&#39;],3)}, Expectancy % - 
{round(stat_11[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_11[&#39;Sharpe Ratio&#39;],3)}.&quot;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: MyCandlesStrat_11.sl_pct_l.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: MyCandlesStrat_11.tp_pct_l.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;: round(MyCandlesStrat_11.tp_pct_l.tolist() * 0.996, 2),<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: MyCandlesStrat_11.sl_pct_s.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: MyCandlesStrat_11.tp_pct_s.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: round(MyCandlesStrat_11.tp_pct_s.tolist() * 0.996, 2),<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: MyCandlesStrat_11.margin_leverage.tolist(),<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: MyCandlesStrat_11.TRAILING_ACTIVATE_PCT.tolist(),<br>            &#39;TRAILING_STOP_PCT&#39; : MyCandlesStrat_11.TRAILING_STOP_PCT.tolist(),<br>            &#39;roi_at_50&#39; : MyCandlesStrat_11.roi_at_50.tolist(),<br>            &#39;roi_at_100&#39; : MyCandlesStrat_11.roi_at_100.tolist(),<br>            &#39;roi_at_150&#39; : MyCandlesStrat_11.roi_at_150.tolist(),<br>            &#39;roi_at_200&#39; : MyCandlesStrat_11.roi_at_200.tolist(),<br>            &#39;roi_at_300&#39; : MyCandlesStrat_11.roi_at_300.tolist(),<br>            &#39;roi_at_500&#39; : MyCandlesStrat_11.roi_at_500.tolist()<br>                          }<br>            <br>            # print(&quot;best_params under stat_11 &quot;, best_params)<br>            key_mapping = {<br>                &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>                &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>                &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 
&#39;take_profit_percent_long&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>                &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>                &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>                &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>                &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>                &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>                &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>                &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>            }<br>            # Update or add new entry to custom_assets<br>            custom_assets = load_custom_assets()<br>            <br>            transformed_params = {}<br>            for old_key, value in best_params.items():<br>                new_key = key_mapping.get(old_key, old_key)<br>                transformed_params[new_key] = value<br>            new_key = file_prefix + &quot;/USDT:USDT&quot;<br>            # custom_assets[new_key] = transformed_params<br>            if new_key in custom_assets:<br>                # Update existing entry<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key].update(transformed_params)<br>            else:<br>                # Add new entry<br>            
    # Multiply numerical values by 0.001 for new entry if value &gt; 1<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key] = transformed_params<br>            # Save custom_assets to JSON file<br>            save_custom_assets(custom_assets)<br>        print(&quot;custom_assets after save &quot;, custom_assets)<br>    return df, symbol_name, custom_assets<br>    # except Exception as e:<br>    #     # Print the error message<br>    #     print(f&quot;Error processing {file_path}: {e}&quot;)<br>    #     print(&quot;custom assets at error level line 361 &quot;, custom_assets)<br>    #     # Return None for both DataFrame and symbol name to indicate failure<br>    #     return None, symbol_name, custom_assets<br><br># Define a thread worker function<br>def thread_worker(file):<br>    result = process_json(file)<br>    return result<br>def main():<br>    # Get a list of all JSON files in the folder<br>    # NOTE: make sure to mention the tradingview downloaded data folder here<br>    json_files = [f&quot;./tradingview_crypto_assets_15m/{file}&quot; for file in os.listdir(&quot;./tradingview_crypto_assets_15m/&quot;) if file.endswith(&quot;.json&quot;)]<br>    # print(json_files)<br>    # Get the number of available CPU cores<br>    num_cores = os.cpu_count()<br>    # print(num_cores)<br>    # Set the max_workers parameter based on the number of CPU cores<br>    max_workers = (num_cores) if (num_cores &gt; 1) else 1  # Default to 1 if CPU count cannot be determined<br>    # max_workers = 1  # Default to 1 if CPU count cannot be determined<br>    print(&#39;max workers (Total Number of CPU cores to be used) - &#39;, max_workers)<br>    # Process JSON files in parallel using multi-core processing<br>    with 
ThreadPoolExecutor(max_workers=max_workers) as executor:<br>        # Submit threads for each JSON file<br>        futures = [executor.submit(thread_worker, file) for file in json_files]<br>    # Wait for all threads to complete<br>    results = [future.result() for future in futures]<br>    # Process the results as needed<br>    for result in results:<br>        if result is None:<br>            continue<br>        df, symbol_name, custom_assets = result<br>        print(f&quot;Processed {symbol_name}&quot;)<br>        print(f&#39;custom_assets &#39;, custom_assets)<br>        if custom_data:  # Check if custom_data is not None<br>            custom_assets.update(custom_data)<br>            <br># Define a function to continuously run the loop<br>def run_continuous_loop():<br>    while True:<br>        main()<br># Start the continuous loop in a separate thread<br>thread = threading.Thread(target=run_continuous_loop)<br>thread.start()</pre><pre>output:<br>max workers (Total Number of CPU cores to be used) -  4<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   85.237208<br>Equity Final [$]                  45917.74697<br>Equity Peak [$]                  119511.93047<br>Return [%]                         -54.082253<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                  -98.222272<br>Volatility (Ann.) [%]                3.390676<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -63.780594<br>Avg. Drawdown [%]                   -7.944307<br>Max. 
Drawdown Duration       65 days 12:15:00<br>Avg. Drawdown Duration        6 days 13:06:00<br># Trades                                  704<br>Win Rate [%]                        42.471591<br>Best Trade [%]                       7.078622<br>Worst Trade [%]                     -5.342172<br>Avg. Trade [%]                      -0.100692<br>Max. Trade Duration           0 days 16:00:00<br>Avg. Trade Duration           0 days 02:09:00<br>Profit Factor                        0.910244<br>Expectancy [%]                      -0.083294<br>SQN                                 -1.448338<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object</pre><pre>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   68.295829<br>Equity Final [$]                  78570.91076<br>Equity Peak [$]                 102494.351527<br>Return [%]                         -21.429089<br>Buy &amp; Hold Return [%]               10.347826<br>Return (Ann.) [%]                  -70.858652<br>Volatility (Ann.) [%]               51.978975<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -41.288206<br>Avg. Drawdown [%]                  -15.066559<br>Max. Drawdown Duration       71 days 23:15:00<br>Avg. Drawdown Duration       24 days 03:40:00<br># Trades                                  219<br>Win Rate [%]                        57.990868<br>Best Trade [%]                      17.226306<br>Worst Trade [%]                     -5.348487<br>Avg. Trade [%]                        0.01822<br>Max. Trade Duration           2 days 10:30:00<br>Avg. 
Trade Duration           0 days 09:08:00<br>Profit Factor                        1.061187<br>Expectancy [%]                        0.12345<br>SQN                                 -0.480221<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   77.210836<br>Equity Final [$]                255299.429466<br>Equity Peak [$]                 263013.301626<br>Return [%]                         155.299429<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                13205.285493<br>Volatility (Ann.) [%]            18553.318471<br>Sharpe Ratio                         0.711748<br>Sortino Ratio                      269.760311<br>Calmar Ratio                       559.102219<br>Max. Drawdown [%]                  -23.618732<br>Avg. Drawdown [%]                    -2.60826<br>Max. Drawdown Duration       21 days 15:00:00<br>Avg. Drawdown Duration        0 days 17:31:00<br># Trades                                  285<br>Win Rate [%]                         68.77193<br>Best Trade [%]                       7.078622<br>Worst Trade [%]                     -5.355119<br>Avg. Trade [%]                       0.427783<br>Max. Trade Duration           2 days 05:30:00<br>Avg. 
Trade Duration           0 days 08:32:00<br>Profit Factor                        1.408437<br>Expectancy [%]                       0.486645<br>SQN                                  1.913964<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                           Size  En...<br>dtype: object<br>Error decoding JSON in custom_assets.txt: Expecting value: line 1 column 1 (char 0)<br>{&#39;NKN/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;, &#39;Optimizer_result&#39;: &#39;For NKN/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.77, Return % - 155.299,Expectancy % - 0.48665 and Sharpe Ratio - 0.7117.&#39;, &#39;stop_loss_percent_long&#39;: 0.052, &#39;take_profit_percent_long&#39;: 0.055, &#39;limit_long&#39;: 0.054, &#39;stop_loss_percent_short&#39;: 0.052, &#39;take_profit_percent_short&#39;: 0.055, &#39;limit_short&#39;: 0.054, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.045, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.054, &#39;roi_at_100&#39;: 0.05, &#39;roi_at_150&#39;: 0.045, &#39;roi_at_200&#39;: 0.04, &#39;roi_at_300&#39;: 0.03, &#39;roi_at_500&#39;: 0.01}}<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   74.874588<br>Equity Final [$]                128738.734226<br>Equity Peak [$]                 159762.295359<br>Return [%]                          28.738734<br>Buy &amp; Hold Return [%]               -7.367375<br>Return (Ann.) [%]                  225.143219<br>Volatility (Ann.) 
[%]              319.059091<br>Sharpe Ratio                         0.705647<br>Sortino Ratio                        4.823065<br>Calmar Ratio                        10.486484<br>Max. Drawdown [%]                  -21.469848<br>Avg. Drawdown [%]                   -4.048638<br>Max. Drawdown Duration       13 days 09:15:00<br>Avg. Drawdown Duration        1 days 18:33:00<br># Trades                                  183<br>Win Rate [%]                        68.306011<br>Best Trade [%]                       6.186671<br>Worst Trade [%]                     -5.332174<br>Avg. Trade [%]                       0.294714<br>Max. Trade Duration           2 days 18:30:00<br>Avg. Trade Duration           0 days 11:01:00<br>Profit Factor                        1.262946<br>Expectancy [%]                       0.362138<br>SQN                                  0.587677<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br>{&#39;NKN/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;, &#39;Optimizer_result&#39;: &#39;For NKN/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.77, Return % - 155.299,Expectancy % - 0.48665 and Sharpe Ratio - 0.7117.&#39;, &#39;stop_loss_percent_long&#39;: 0.052, &#39;take_profit_percent_long&#39;: 0.055, &#39;limit_long&#39;: 0.054, &#39;stop_loss_percent_short&#39;: 0.052, &#39;take_profit_percent_short&#39;: 0.055, &#39;limit_short&#39;: 0.054, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.045, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.054, &#39;roi_at_100&#39;: 0.05, &#39;roi_at_150&#39;: 0.045, &#39;roi_at_200&#39;: 0.04, &#39;roi_at_300&#39;: 0.03, 
&#39;roi_at_500&#39;: 0.01}, &#39;NEO/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&#39;, &#39;Optimizer_result&#39;: &#39;For NEO/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.31, Return % - 28.739,Expectancy % - 0.36214 and Sharpe Ratio - 0.7056.&#39;, &#39;stop_loss_percent_long&#39;: 0.052, &#39;take_profit_percent_long&#39;: 0.055, &#39;limit_long&#39;: 0.054, &#39;stop_loss_percent_short&#39;: 0.052, &#39;take_profit_percent_short&#39;: 0.055, &#39;limit_short&#39;: 0.054, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.045, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.054, &#39;roi_at_100&#39;: 0.05, &#39;roi_at_150&#39;: 0.045, &#39;roi_at_200&#39;: 0.04, &#39;roi_at_300&#39;: 0.03, &#39;roi_at_500&#39;: 0.01}}<br><br><br><br>.................................................................................................................................<br>(output goes on for all the assets and then short listed assets get saved inside custom_assets.txt)</pre><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>The provided Python code appears to be related to backtesting a cryptocurrency trading strategy. 
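Before the detailed breakdown, here is a minimal sketch of what a process_json-style loader could look like. This is a hedged illustration, assuming pandas; the column names, file layout, and the helper's exact signature are hypothetical, since the article does not show the actual implementation:

```python
# Hypothetical sketch of a process_json-style loader (not the article's code).
# Assumes each JSON file holds a list of candle dicts with lowercase keys.
import json

import pandas as pd


def process_json(path):
    with open(path) as f:
        records = json.load(f)  # list of candle dictionaries
    df = pd.DataFrame(records)
    # Rename columns to standard names (e.g., 'date' -> 'Date').
    df = df.rename(columns={"date": "Date", "open": "Open", "high": "High",
                            "low": "Low", "close": "Close", "volume": "Volume"})
    # Convert 'Date' to datetime and use it as the index.
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.set_index("Date").sort_index()
    # Fill missing 'Close' values with the previous close price.
    df["Close"] = df["Close"].ffill()
    # Extract the symbol name from the 'symbol' column (e.g., "ETH/USDT:USDT").
    symbol_name = (str(df["symbol"].iloc[0]).split(":")[0]
                   if "symbol" in df.columns else None)
    return df, symbol_name
```

From here the real pipeline would go on to compute indicators, scale features, and reshape the data into sequences, as described below.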
Here’s a breakdown of the code functionalities:</p><p><strong>Data Processing:</strong></p><ol><li><strong>Function </strong><strong>process_json:</strong> This function reads a JSON file containing cryptocurrency price data.</li><li><strong>Data Cleaning and Transformation:</strong> It cleans and transforms the data by:</li></ol><ul><li>Renaming columns to standard names (e.g., ‘date’ to ‘Date’).</li><li>Converting the ‘Date’ column to datetime format.</li><li>Setting ‘Date’ as the index.</li><li>Filling missing values in the ‘Close’ column with the previous close price.</li><li>Extracting the symbol name from the ‘symbol’ column.</li></ul><ol><li><strong>Technical Indicator Calculation:</strong> The script calculates various technical indicators like ATR, EMA, RSI, etc., using the ta library (assumed to be imported).</li><li><strong>Feature Engineering:</strong> It creates additional features like returns, volatility, volume-based indicators, and momentum-based indicators.</li><li><strong>Data Scaling:</strong> The script scales the data using MinMaxScaler for better model performance during backtesting.</li><li><strong>Reshaping Data:</strong> The data is reshaped into a format suitable for the trading strategy (e.g., sequences of past price data).</li></ol><p><strong>Backtesting Strategy:</strong></p><ol><li><strong>Function </strong><strong>SIGNAL_3:</strong> This function likely defines the trading signals based on some criteria (not shown in the provided code).</li><li><strong>Class </strong><strong>MyCandlesStrat_3:</strong> This class defines the trading strategy using a backtesting library (the stats output above matches backtesting.py rather than Backtrader). Key elements include:</li></ol><ul><li><strong>Stop-loss and Take-profit:</strong> These are set based on predefined percentages (BEST_STOP_LOSS_sl_pct_long, etc.) 
for long and short positions.</li><li><strong>Limit orders:</strong> These are used to ensure order execution within a specific price range.</li><li><strong>Trailing Stop-loss:</strong> The stop-loss is dynamically adjusted based on current profit to lock in gains.</li><li><strong>Time-based profit taking:</strong> Profits are automatically locked in after a certain time holding the asset.</li><li><strong>Leverage:</strong> The strategy uses a predefined leverage multiplier (BEST_LEVERAGE_margin_leverage).</li></ul><p><strong>Backtesting and Analysis:</strong></p><ol><li><strong>Backtest:</strong> The script performs a backtest on the processed data using the MyCandlesStrat_3 strategy with a starting capital of 100000.</li><li><strong>Performance Metrics:</strong> Backtesting results likely include various performance metrics like returns, Sharpe Ratio, Win Rate, and Drawdown (not explicitly shown in the provided code).</li></ol><p><strong>Conditional Logic:</strong></p><ul><li>The script checks if certain performance conditions are met (high return, good profit factor, etc.).</li><li>If the conditions are satisfied, the script potentially saves the trading strategy parameters for this specific asset.</li></ul><p>The script also uses the ThreadPoolExecutor class for parallel processing of JSON files. Here&#39;s a breakdown of its functionality:</p><p><strong>1. Thread Worker Function (</strong><strong>thread_worker):</strong></p><ul><li>This function takes a single JSON file path as input (file).</li><li>It calls the process_json function (assumed to be defined elsewhere) to process the JSON data.</li><li>It returns the processed result, likely a Pandas DataFrame (df), symbol name (symbol_name), and potentially other custom data (custom_assets).</li></ul><p><strong>2. 
Main Function (</strong><strong>main):</strong></p><ul><li>It retrieves a list of all JSON files within a specified folder (./tradingview_crypto_assets_15m/).</li><li>It determines the number of available CPU cores using os.cpu_count().</li><li>It sets the max_workers parameter for the ThreadPoolExecutor based on the CPU cores (using all cores if available, defaulting to 1 otherwise).</li><li>It prints the number of cores to be used for processing.</li><li>It creates a ThreadPoolExecutor with the determined max_workers.</li><li>It iterates through the list of JSON files and submits each file path to the thread pool using executor.submit(thread_worker, file). This creates tasks for each file to be processed concurrently.</li><li>It waits for all submitted tasks (futures) to complete using future.result() and stores the results in a list (results).</li><li>It iterates through the processing results:</li><li>If a result is None, it skips to the next iteration (potentially handling errors).</li><li>Otherwise, it unpacks the result (df, symbol_name, and potentially custom_assets).</li><li>It prints information about the processed symbol and the custom assets (if any).</li><li>It conditionally updates custom_assets with additional custom data (custom_data) if it exists (logic not entirely shown).</li></ul><p><strong>3. Continuous Loop Function (</strong><strong>run_continuous_loop):</strong></p><ul><li>This function defines an infinite loop (while True).</li><li>Inside the loop, it calls the main function, presumably to process a batch of JSON files repeatedly.</li></ul><p><strong>4. Starting the Loop:</strong></p><ul><li>The code creates a separate thread using threading.Thread and sets its target to the run_continuous_loop function.</li><li>Finally, it starts the thread, initiating the continuous processing loop.</li></ul><p><strong>Overall, this code snippet demonstrates parallel processing of JSON files using a thread pool based on CPU cores. 
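The fan-out just described can be sketched as follows. This is a hedged sketch, not the actual script: thread_worker is stubbed out, and only the folder name and the cpu_count-based worker sizing are taken from the description above:

```python
# Sketch of the main() fan-out (assumptions: thread_worker returns either
# None on failure or a (df, symbol_name, custom_assets)-style tuple).
import os
from concurrent.futures import ThreadPoolExecutor, as_completed


def thread_worker(file):
    # Placeholder for the article's process_json-based worker.
    return None


def main(folder="./tradingview_crypto_assets_15m/"):
    files = [os.path.join(folder, f) for f in os.listdir(folder)
             if f.endswith(".json")]
    # Use all available cores, defaulting to 1 if the count is unknown.
    workers = os.cpu_count() or 1
    print(f"Processing {len(files)} files on {workers} cores")
    results = []
    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = [executor.submit(thread_worker, f) for f in files]
        for future in as_completed(futures):
            result = future.result()
            if result is None:  # skip files that failed to process
                continue
            results.append(result)
    return results
```

Note that for CPU-bound indicator math, threads mostly help when the underlying libraries release the GIL; a ProcessPoolExecutor is a common alternative.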
The loop continuously processes batches of files.</strong></p><p><strong>The code demonstrates a framework for backtesting a cryptocurrency trading strategy that uses technical indicators and incorporates risk management techniques like stop-loss and trailing stop-loss.</strong></p><p><strong>Disclaimer:</strong></p><ul><li>Always remember that backtesting results may not be indicative of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h4>custom_assets.txt Output:</h4><pre>{<br>    &quot;NKN/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For NKN/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.77, Return % - 155.299,Expectancy % - 0.48665 and Sharpe Ratio - 0.7117.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;NEO/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: 
&quot;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For NEO/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.31, Return % - 28.739,Expectancy % - 0.36214 and Sharpe Ratio - 0.7056.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;AUDIO/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For AUDIO/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 67.05, Return % - 87.616,Expectancy % - 0.66847 and Sharpe Ratio - 0.5269.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        
&quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;ENA/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For ENA/USDT:USDT backtest was done from 2024-04-04 00:15:00 upto 2024-05-14 04:00:00 for a duration of 40 days 03:45:00 using time frame of 15m with Win Rate % - 70.0, Return % - 6.486,Expectancy % - 0.67719 and Sharpe Ratio - 0.3738.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    }<br>.................................... <br>(all 30+ assets got shortlisted as per the parameters given by us during <br>optimization and backtesting with downloaded data for the neural <br>network model we trained)<br></pre><p>The provided data snippet appears to be the results of backtesting a cryptocurrency trading strategy on multiple assets. 
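As an illustration of how parameters such as TRAILING_ACTIVATE_PCT (0.045) and TRAILING_STOP_PCT (0.005) from the dictionaries above might drive a trailing stop for a long position, here is a hedged sketch; the function name and update rule are illustrative, not VishvaAlgo's actual code:

```python
# Illustrative trailing-stop update for a long position (hypothetical helper).
# Once unrealized profit exceeds activate_pct, the stop is ratcheted up to
# trail_pct below the current price; the stop never moves down.
def update_trailing_stop(entry, price, stop, activate_pct=0.045, trail_pct=0.005):
    profit = (price - entry) / entry
    if profit >= activate_pct:
        candidate = price * (1 - trail_pct)
        # Only raise the stop, never lower it.
        if stop is None or candidate > stop:
            return candidate
    return stop
```

For example, a long entered at 100 with the price at 105 (5% profit, above the 4.5% activation) would place the stop at 104.475; a later dip to 104 leaves that stop unchanged.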
Here’s a breakdown of the information:</p><p><strong>Structure:</strong></p><ul><li>It’s a dictionary with currency pairs (e.g., “MATIC/USDT:USDT”) as keys.</li></ul><p><strong>Content for Each Asset:</strong></p><ul><li><strong>Optimizer_used:</strong> This specifies the optimization method used for backtesting (here, “1st backtest — Expectancy”).</li><li><strong>model_name:</strong> This indicates the model name used for the trading strategy (“model_lstm_Balanced__15m_ETH_SL55_TP55_ShRa_0.71_time_20240529033537.keras”).</li><li><strong>Optimizer_result:</strong> This is a detailed description of the backtesting results for the specific asset. It includes:</li><li>Start and end date of the backtest.</li><li>Backtesting duration.</li><li>Timeframe used (e.g., 15m).</li><li>Win Rate percentage.</li><li>Return percentage.</li><li>Expectancy percentage.</li><li>Sharpe Ratio.</li><li><strong>stop_loss_percent_long/short:</strong> These define the stop-loss percentages for long and short positions.</li><li><strong>take_profit_percent_long/short:</strong> These define the take-profit percentages for long and short positions.</li><li><strong>limit_long/short:</strong> These define the maximum price deviation allowed for entry orders (likely to prevent excessive slippage).</li><li><strong>margin_leverage:</strong> This specifies the leverage used for margin trading (set to 1 here, indicating no leverage).</li><li><strong>TRAILING_ACTIVATE_PCT &amp; TRAILING_STOP_PCT:</strong> These define parameters for trailing stop-loss, which adjusts the stop-loss dynamically.</li><li><strong>roi_at_50, 100, 150, etc.:</strong> These are potentially profit targets at different holding durations (e.g., roi_at_50 might be the target profit for holding 50% of the time).</li></ul><p><strong>Interpretation:</strong></p><ul><li>This data likely comes from a backtesting tool that evaluated a specific trading strategy on various cryptocurrencies.</li><li>The results show performance metrics 
like win rate, return, and Sharpe Ratio for each asset.</li><li>Stop-loss, take-profit, and leverage parameters define the risk management aspects of the strategy.</li></ul><p><strong>Shortlisted Assets and Saving:</strong></p><ul><li>The statement mentions “shortlisted assets” but doesn’t explicitly show how they are identified. It’s possible that assets meeting certain performance criteria (based on the backtesting results) are considered shortlisted.</li><li>These shortlisted assets are potentially saved in a file named “custom_assets.txt” in the same format as the provided data snippet.</li></ul><p><strong>Disclaimer:</strong></p><ul><li>Backtesting results are not a guarantee of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h3>Conclusion:</h3><p>The LSTM time series classification model has proven to be a valuable tool for predicting Ethereum price movements and generating profitable trading signals. 
Despite the high volatility and substantial drawdowns, the model’s robust returns and positive expectancy demonstrate its potential in algorithmic trading.</p><p>However, there are areas for further improvement and optimization:</p><ol><li><strong>Risk Management</strong>: Implementing advanced risk management techniques could help mitigate drawdowns and volatility.</li><li><strong>Model Optimization</strong>: Continuous refinement of the LSTM model’s architecture and hyperparameters can enhance performance.</li><li><strong>Broader Application</strong>: Extending the model to other assets and timeframes could provide additional insights and opportunities.</li><li><strong>Live Trading</strong>: Testing the strategy in a live trading environment would provide practical insights and validate its real-world applicability.</li></ol><p>Overall, the LSTM model’s ability to capture complex patterns in time series data makes it a powerful tool for trading strategies, with significant potential for generating high returns.</p><p>This article describes a cryptocurrency trading system that utilizes a neural network model (specifically an LSTM model) and a trading bot called VishvaAlgo.</p><p>Here’s a breakdown:</p><p><strong>Data and Model Training:</strong></p><ul><li>The system downloads historical data for 250+ cryptocurrency assets on Binance Futures from TradingView.</li><li>It trains an LSTM-based neural network model on over 100,000 rows of 15-minute time frame data with 193+ features, achieving a claimed return of 700%+ on Ethereum (ETHUSDT) in 3 years by using the classification-based LSTM model to estimate whether to go neutral, long, or short. 
(<strong>important to note: these returns vary from system to system based on the trained data and need re-verification</strong>).</li></ul><p><strong>Hyperparameter Optimization and Asset Selection:</strong></p><ul><li>The system uses Hyperopt (a hyperparameter optimization library) to identify the most suitable assets for the trained model among the downloaded data.</li><li>Each shortlisted asset has a unique set of parameters, such as stop-loss, take-profit, and leverage, tailored to the model’s predictions.</li></ul><p><strong>VishvaAlgo — The Trading Bot:</strong></p><ul><li>VishvaAlgo helps automate live trading using the trained model and the shortlisted assets with their pre-defined parameters.</li><li>The bot offers easy integration with various neural network models for classification.</li><li>A video explaining VishvaAlgo’s features and benefits is available <strong><em>— </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></li></ul><p><strong>Benefits of VishvaAlgo:</strong></p><ul><li>Automates trading based on the trained model and optimized asset selection.</li><li>Offers easy integration with user-defined neural network models.</li><li>A detailed explanation and installation guide are provided with purchase through my Patreon page.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FKWAvZraD5aM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKWAvZraD5aM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FKWAvZraD5aM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/fa4c736694b0d947204a89e359dce943/href">https://medium.com/media/fa4c736694b0d947204a89e359dce943/href</a></iframe><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a 
href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p><strong><em>Disclaimer:</em></strong><em> Trading involves risk. Past performance is not indicative of future results. VishvaAlgo is a tool to assist traders and does not guarantee profits. Please trade responsibly and conduct thorough research before making investment decisions.</em></p><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6229f941b823" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[9,883+% Returns in 3 years on Cryptocurrency using 2D Convolutional Neural Network (CNN) Model and…]]></title>
            <link>https://imbuedeskpicasso.medium.com/9-883-returns-in-3-years-on-cryptocurrency-using-2d-convolutional-neural-network-cnn-model-and-2105ee7e2893?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/2105ee7e2893</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <category><![CDATA[crypto]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Fri, 21 Jun 2024 16:33:24 GMT</pubDate>
            <atom:updated>2024-06-21T16:33:24.467Z</atom:updated>
<content:encoded><![CDATA[<h3>9,883+% Returns in 3 years on Cryptocurrency using 2D Convolutional Neural Network (CNN) Model and short listing Best Assets for Trading — VishvaAlgo Machine Learning Trading Bot</h3><p>Unleashing the power of Neural Networks for creating a Trading Bot for maximum profits.</p><h3>Introduction:</h3><p>Welcome to the world of algorithmic trading and machine learning, where innovation meets profitability. Over the past three years, I’ve dedicated myself to developing algorithmic trading systems that harness the power of various strategies. Through relentless experimentation and refinement, I’ve achieved impressive returns across multiple strategies, delighting members of<a href="https://www.patreon.com/pppicasso"><strong><em> my Patreon community with consistent profits</em></strong></a>.</p><p>In the pursuit of excellence, I recently launched <a href="https://www.patreon.com/pppicasso/shop"><strong><em>VishvaAlgo, a machine learning-based algorithmic trading system that leverages neural network classification models</em></strong></a><strong><em>.</em></strong> This cutting-edge platform has already demonstrated remarkable results, delivering exceptional returns to traders in the cryptocurrency market. Through a series of articles and practical demonstrations, I’ve shared insights on transitioning from traditional algorithmic trading to deploying practical machine learning models, showcasing their effectiveness in real-world trading environments.</p><p>In this article, we delve into the transformative potential of algorithmic trading and machine learning, focusing on the effectiveness of neural networks, specifically convolutional neural network (CNN) models. 
Building upon our past successes, we set out to demonstrate the remarkable profitability achievable with advanced machine learning models, using Bitcoin (BTC) and Ethereum (ETH) as our primary assets.</p><p>Our analysis focuses on Ethereum pricing in USDT, utilizing 15-minute candlestick data spanning from January 1st, 2021, to October 22nd, 2023, comprising over 97,000 rows of data and more than 190 features. By leveraging neural network models for prediction, we aim to identify optimal long and short positions, showcasing the potential of deep learning in financial markets.</p><blockquote>Our story is one of relentless innovation, fueled by a burning desire to unlock the full potential of Deep Learning in the pursuit of profit. In this article, we invite you to join us as we unravel the exciting tale of our transformation from humble beginnings to groundbreaking success.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*D7aGS10HFf-DRV2G.png" /><figcaption>CNN classification model for crypto algo trading</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*d1LPF-ZH0ENthxX-.png" /><figcaption>CNN classification model for crypto algo trading</figcaption></figure><h3>Our Algorithmic Trading Vs/+ Machine Learning Vs/+ Deep Learning Journey so far?</h3><h4>Stage 1:</h4><p>We have developed a crypto Algorithmic Strategy which gave us huge profits when run on multiple crypto assets (138+), with profits of 8787%+ in a span of almost 3 years.</p><h4>“The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! 
Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — <a href="https://imbuedeskpicasso.medium.com/the-8787-roi-algo-strategy-unveiled-for-crypto-futures-22a5dd88c4a5">Link</a></h4><p>We ran live trading in dry-run mode for 7 days, and the details are shared in another article.</p><h4>“Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — <a href="https://imbuedeskpicasso.medium.com/freqtrade-revealed-7-day-journey-in-algorithmic-trading-for-crypto-futures-market-1032c409d6bd">Link</a></h4><p>After<strong> successful backtest results and forward testing</strong> (live trading in dry-run mode), we planned to improve its odds of making more profit (lower stop-losses, better odds of winning, reduced risk, and other important factors).</p><h4>Stage 2:</h4><p>We worked on developing a standalone strategy without the freqtrade setup (forgoing the trailing stop-loss, multiple-asset parallel running, and the stronger risk management that freqtrade, a free open-source platform, provides), tested it in the market, optimized it using hyperparameters, and obtained positive profits from the strategy.</p><h4>“How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — <a href="https://medium.com/p/b1de0d20cd39">Link</a></h4><h4>Stage 3:</h4><p>As we had tested our strategy on only one asset (BTC/USDT in the crypto market), we wanted to segregate the whole collection of assets (which we had used for developing the Freqtrade strategy earlier) into different clusters based on their volatility; trading tuned to coin volatility makes it easier to trade the more volatile assets without hitting huge stop-losses on the others.</p><p>We used <strong>K-nearest Neighbors (KNN Means)</strong> to identify different 
clusters of assets out of 138 crypto assets we use in our freqtrade strategy, which gave us <strong>8000+% profits</strong> during backtest.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — <a href="https://medium.com/p/0c143a6ab7cb">Link</a></h4><h4>Stage 4:</h4><p>Now, we introduce an unsupervised machine learning model, the Hidden Markov Model (HMM), to identify trends in the market, trade only during profitable trends, and avoid sudden pumps, dumps, and negative trends. The explanation below unravels the same.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -2 (Hidden Markov Model — HMM)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-2-hidden-markov-model-98e4894e3d9e">Link</a></h4><h4>Stage 5:</h4><p>I worked on using XGBoost Classifier to identify long and short trades using our old signal. Before using it, we ensured that the signal algorithm we had previously developed was hyper-optimized. Additionally, we introduced different stop-loss and take-profit parameters for this setup, causing the target values to change accordingly. We also adjusted the parameters used for obtaining profitable trades based on the stop-loss and take-profit values. Later, we tested the basic XGBClassifier setup and then enhanced the results by adding re-sampling methods. Our target classes, which include 0’s (neutral), 1’s (for long trades), and 2’s (for short trades), were imbalanced due to the trade execution timing. To address this imbalance, we employed re-sampling methods and performed hyper-optimization of the classifier model. Subsequently, we evaluated if the model performed better with other classifier models such as SVC, CatBoost, and LightGBM, in combination with LSTM and XGBoost. 
Finally, we concluded by analyzing the results and determining feature importance parameters to identify the most productive features.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -3 (XGBoost Classifier , LGBM Classifier, CatBoost Classifier, SVC, LSTM with XGB and Multi level Hyper-optimization)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-3-xgboost-classifier-6c4f49c58800">Link</a></h4><h4>Stage 6:</h4><p>In that stage, I utilized the CatBoostClassifier along with resampling and sample weights. I incorporated multiple time frame indicators such as volume, momentum, trend, and volatility into my model. After running the model, I performed ensembling techniques to enhance its overall performance. The results of my analysis showed a significant increase in profit from 54% to over 4600% during backtesting. Additionally, I highlighted the impressive performance metrics including recall, precision, accuracy, and F1 score, all exceeding 80% for each of the three trading classes (0 for neutral, 1 for long, and 2 for short trades).</p><h4>“From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best” — <a href="https://imbuedeskpicasso.medium.com/from-54-to-a-staggering-4648-catapulting-cryptocurrency-trading-with-catboost-classifier-75ac9f10c8fc">Link</a></h4><h4>Stage 7:</h4><p>In this stage, the <strong><em>ensemble method combining TCN and LSTM neural network models</em></strong> has demonstrated exceptional performance across various datasets, outperforming individual models and even surpassing buy and hold strategies. 
This underscores the effectiveness of ensemble learning in improving prediction accuracy and robustness.</p><h4>“Bitcoin/BTC 4750%+ , Etherium/ETH 11,270%+ profit in 1023 days using Neural Networks, Algorithmic Trading Vs/+ Machine Learning Models Vs/+ Deep Learning Model Part — 4 (TCN, LSTM, Transformer with Ensemble Method)” — <a href="https://medium.com/p/d5a644cdc36f/">Link</a></h4><h4>Stage 8:</h4><p>Experience the future of trading with VishvaAlgo v3.8. With its advanced features, unparalleled risk management capabilities, and ease of integration of ML and neural network models, VishvaAlgo is the ultimate choice for traders seeking consistent profits and peace of mind. Don’t miss out on this opportunity to revolutionize your trading journey.</p><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.8 Live Crypto Trading Using Machine Learning Model</a></blockquote><h4>“VishvaAlgo v3.0 — Revolutionize Your Live Cryptocurrency Trading system Enhanced with Machine Learning (Neural Network) Model. Live Profits Screenshots Shared” — <a href="https://medium.com/p/f4ca0facae7e/">Link</a></h4><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><h3>Introduction to CNNs and Time Series Classification</h3><p><strong>Convolutional Neural Networks (CNNs)</strong> are primarily designed for processing grid-like data, such as images and videos, where they have proven highly effective. 
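</p><p>As a toy illustration of the convolution operation discussed next, assuming only numpy: a small hand-made filter slides across a 1D series and responds most strongly where the local shape (here, a spike) matches. Learned CNN filters work the same way, except their weights are fit from data:</p>

```python
import numpy as np

# A tiny "local peak" detector: conceptually what a learned CNN filter
# does -- it slides over the series and responds strongly wherever the
# local shape matches.
series = np.array([0., 0., 1., 3., 1., 0., 0., 0.])
kernel = np.array([-1., 2., -1.])  # responds to a local peak

# Valid cross-correlation: one response per window position.
response = np.array([series[i:i + 3] @ kernel
                     for i in range(len(series) - 2)])
peak_at = int(response.argmax())  # window centred on the spike at index 3
```

<p>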
CNNs work by applying convolutional filters that scan the input data and learn spatial hierarchies of features. Typically used in image and video processing, CNNs can also be adapted for time series data due to their ability to capture local patterns and trends.</p><h3>CNNs for Image and Video Processing</h3><ul><li><strong>Image Processing</strong>: CNNs are used to identify objects, detect faces, and recognize patterns within images. They work by applying a series of convolutional layers, pooling layers, and fully connected layers to extract features and make predictions.</li><li><strong>Video Processing</strong>: In video data, CNNs can be used for tasks like action recognition and video classification. They process each frame as an image and can use temporal layers to capture the sequence of frames.</li></ul><h3>CNNs for Time Series Data</h3><p>Despite their typical use in image and video processing, CNNs can be highly effective for time series classification. Here’s how:</p><ul><li><strong>Feature Extraction</strong>: CNNs can extract temporal features from time series data, identifying patterns such as trends, seasonality, and anomalies.</li><li><strong>Local Pattern Recognition</strong>: The convolutional filters can capture local patterns within the time series data, which is crucial for financial data where short-term trends and fluctuations matter.</li><li><strong>Dimensionality Reduction</strong>: Pooling layers can reduce the dimensionality of the data, retaining the most important features while reducing computational complexity.</li></ul><h3>2D CNN for Multi-Class Classification on ETH Data</h3><h4>Data Description</h4><ul><li><strong>Asset</strong>: Ethereum (ETH)</li><li><strong>Time Frame</strong>: 15-minute intervals</li><li><strong>Rows</strong>: Over 100,000</li><li><strong>Features</strong>: 193+ (e.g., OHLCV, technical indicators)</li></ul><h4>Model Architecture and Training</h4><ol><li><strong>Input Shape</strong>: The input data for the 
model is structured as a 2D grid per sample, giving the shape (number of samples, number of timesteps, number of features, 1 channel); the trailing channel axis lets a Conv2D layer treat each window as a single-channel image.</li><li><strong>Convolutional Layers</strong>: These layers apply convolutional filters across the input data to learn local temporal features.</li></ol><pre>x = Conv2D(filters=64, kernel_size=(3, 3), activation=&#39;relu&#39;)(inputs)</pre><p><strong>3. Pooling Layers</strong>: These layers reduce the dimensionality of the feature maps while retaining important information.</p><pre>x = MaxPooling2D(pool_size=(2, 2))(x)</pre><p><strong>4. Dense Layers</strong>: Fully connected layers interpret the features extracted by the convolutional layers.</p><pre>x = Flatten()(x)<br>x = Dense(units=128, activation=&#39;relu&#39;)(x)</pre><p><strong>5. Output Layer</strong>: The final layer uses a softmax activation function to output probabilities for each class (neutral, long, short).</p><pre>outputs = Dense(3, activation=&#39;softmax&#39;)(x)</pre><h4>Training the Model</h4><p>The model is trained using labeled time series data, where each segment of the time series is labeled as 0 (neutral), 1 (long), or 2 (short).</p><pre>model.compile(optimizer=&#39;adam&#39;, loss=&#39;categorical_crossentropy&#39;, metrics=[&#39;accuracy&#39;])<br>model.fit(X_train, y_train, epochs=50, batch_size=64, validation_split=0.2)</pre><h4>Example Code</h4><p>Here is a complete example of a 2D CNN for multi-class classification of ETH time series data:</p><pre>import numpy as np<br>import keras<br>from keras.models import Model<br>from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout<br>from keras.utils import to_categorical<br>from sklearn.model_selection import train_test_split<br><br># Simulate data<br>X = np.random.rand(100000, 15, 193, 1)  # 100,000 samples, 15 timesteps, 193 features, 1 channel<br>y = np.random.randint(3, size=100000)   # 100,000 labels (0, 1, 2)<br># Convert labels to one-hot encoding<br>y = to_categorical(y, num_classes=3)<br># Split data into training 
and validation sets<br>X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)<br># Build model<br>inputs = Input(shape=(15, 193, 1))<br>x = Conv2D(filters=64, kernel_size=(3, 3), activation=&#39;relu&#39;)(inputs)<br>x = MaxPooling2D(pool_size=(2, 2))(x)<br>x = Flatten()(x)<br>x = Dense(units=128, activation=&#39;relu&#39;)(x)<br>x = Dropout(0.5)(x)<br>outputs = Dense(3, activation=&#39;softmax&#39;)(x)<br>model = Model(inputs=inputs, outputs=outputs)<br>model.compile(optimizer=&#39;adam&#39;, loss=&#39;categorical_crossentropy&#39;, metrics=[&#39;accuracy&#39;])<br># Train model<br>model.fit(X_train, y_train, epochs=10, batch_size=64, validation_data=(X_val, y_val))</pre><h3>Effectiveness of CNNs for Time Series Classification</h3><ul><li><strong>Pattern Recognition</strong>: CNNs can effectively capture and recognize patterns in time series data, making them suitable for financial market analysis.</li><li><strong>Speed</strong>: CNNs can process large datasets efficiently, which is crucial for high-frequency trading strategies.</li><li><strong>Accuracy</strong>: When properly tuned, CNNs can achieve high accuracy in predicting market movements, helping traders make informed decisions.</li></ul><p>In summary, while CNNs are traditionally used for image and video processing, their ability to capture local patterns and reduce dimensionality makes them highly effective for time series classification in financial markets. 
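</p><p>One practical detail the example above sidesteps by simulating random data is how to turn a flat (rows × features) indicator matrix into the 4D input a 2D CNN expects. A minimal sliding-window sketch, assuming numpy and illustrative sizes (the function name make_windows is not from the original code):</p>

```python
import numpy as np

def make_windows(features: np.ndarray, timesteps: int) -> np.ndarray:
    """Slice a (rows, n_features) matrix into overlapping windows shaped
    (samples, timesteps, n_features, 1), the 4D input Conv2D expects."""
    rows, n_features = features.shape
    windows = np.stack([features[i:i + timesteps]
                        for i in range(rows - timesteps + 1)])
    return windows[..., np.newaxis]  # add the single "channel" axis

# Toy stand-in for the real (100k rows x 193 indicators) feature matrix.
flat = np.random.rand(50, 193)
X = make_windows(flat, timesteps=15)  # shape (36, 15, 193, 1)
```

<p>In a real pipeline each window's label would come from the bar that immediately follows the window, so that no future data leaks into the input.</p><p>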
This adaptability allows for robust models that can predict market movements, such as those in the cryptocurrency market, aiding in the classification of positions like long, short, and neutral.</p><h3>The Whole code Explanation:</h3><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><pre># Remove Future Warnings<br>import warnings<br>warnings.simplefilter(action=&#39;ignore&#39;, category=FutureWarning)<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># General<br>import numpy as np<br># Data Management<br>import pandas as pd<br># Machine Learning<br>from catboost import CatBoostClassifier<br>from sklearn.model_selection import train_test_split<br>from sklearn.model_selection import RandomizedSearchCV, cross_val_score<br>from sklearn.model_selection import RepeatedStratifiedKFold<br>from sklearn.linear_model import LogisticRegression<br># ensemble<br>from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier<br>from sklearn.ensemble import StackingClassifier<br>from sklearn.ensemble import VotingClassifier<br>#Sampling Methods<br>from imblearn.over_sampling import ADASYN<br>#Scaling<br>from sklearn.preprocessing import MinMaxScaler<br># Binary Classification Specific Metrics<br>from sklearn.metrics import RocCurveDisplay as plot_roc_curve<br># General Metrics<br>from sklearn.metrics import accuracy_score<br>from sklearn.metrics import precision_score<br>from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score, accuracy_score<br>from sklearn.metrics import precision_score<br>from 
sklearn.metrics import ConfusionMatrixDisplay<br><br># Reporting<br>import matplotlib.pyplot as plt<br>from matplotlib.pylab import rcParams<br>from xgboost import plot_tree<br>#Backtesting<br>from backtesting import Backtest<br>from backtesting import Strategy<br>#hyperopt<br>from hyperopt import fmin, tpe, hp<br>from pandas_datareader.data import DataReader<br>import json<br>from datetime import datetime<br>import talib as ta<br>import ccxt<br># from sklearn.model_selection import train_test_split<br>from sklearn.utils import class_weight<br>from keras.models import Sequential<br>from keras.layers import LSTM, Dense, Dropout<br>from keras.optimizers import Adam<br># from keras.wrappers.scikit_learn import KerasClassifier<br>from sklearn.ensemble import VotingClassifier<br>from hyperopt import fmin, tpe, hp, STATUS_OK, Trials</pre><p><strong>Import Statements (Lines 1–18):</strong></p><ul><li><strong>Warnings (Lines 1–4):</strong></li><li>These lines suppress warnings that might appear during execution. While this keeps training output uncluttered, it is generally better to address the warnings themselves for easier debugging and for understanding potential issues.</li><li><strong>General Libraries (Lines 5–7):</strong></li><li>numpy (np): Provides numerical computing capabilities; used here for array operations and mathematical functions on price and feature data.</li><li>pandas (pd): Used for data manipulation and analysis; essential for working with the structured OHLCV and indicator data.</li><li><strong>Machine Learning Libraries (Lines 8–13):</strong></li><li>catboost: Provides the CatBoostClassifier, a powerful gradient boosting model and one of the classifiers evaluated in earlier stages.</li><li>scikit-learn (various submodules): A comprehensive machine learning library. 
Several submodules are used throughout the pipeline:</li><li>train_test_split: Splits data into training and testing sets for model evaluation.</li><li>RandomizedSearchCV, cross_val_score, RepeatedStratifiedKFold: Techniques for hyperparameter tuning and model evaluation (cross-validation).</li><li>LogisticRegression: A linear classification model, available as a simple baseline classifier.</li><li><strong>Ensemble Methods (Lines 14–16):</strong></li><li>scikit-learn (submodules): Techniques for combining multiple models (stacking and voting) to improve performance.</li><li><strong>Sampling Methods (Line 17):</strong></li><li>imblearn: Provides tools for handling imbalanced datasets (where classes have unequal sizes); ADASYN is used here to oversample the minority trade classes.</li><li><strong>Scaling (Line 18):</strong></li><li>scikit-learn: MinMaxScaler normalizes features to a common range, which is often necessary before training machine learning models.</li></ul><p><strong>Metrics (Lines 19–33):</strong></p><ul><li><strong>Binary Classification Metrics (Lines 19–21):</strong></li><li>scikit-learn: Used to evaluate the performance of classification models, in particular for plotting ROC curves.</li><li><strong>General Metrics (Lines 22–33):</strong></li><li>scikit-learn: Various metrics for evaluating model performance across different classification tasks. 
These are used later to assess the trained models:</li><li>accuracy_score: Proportion of correct predictions.</li><li>precision_score: Proportion of true positives among predicted positives.</li><li>confusion_matrix: Visualization of how many instances were classified correctly or incorrectly for each class.</li><li>classification_report: Detailed report on model performance, including precision, recall, F1-score, and support for each class.</li><li>roc_curve, roc_auc_score: Measures for assessing the Receiver Operating Characteristic (ROC) curve, which helps evaluate a model&#39;s ability to discriminate between classes.</li></ul><p><strong>Reporting (Lines 34–36):</strong></p><ul><li>matplotlib.pyplot (plt): Used for creating visualizations like charts and graphs; essential for plotting price data, target distributions, and model results.</li></ul><p><strong>Backtesting (Lines 37–38):</strong></p><ul><li>backtesting: Library for backtesting trading strategies; used below to simulate the strategy on historical data.</li></ul><p><strong>Hyperparameter Optimization (Lines 39–42):</strong></p><ul><li>hyperopt: Library for hyperparameter tuning (finding the best settings for machine learning models).</li></ul><p><strong>Data Retrieval (Line 43):</strong></p><ul><li>pandas_datareader: Facilitates data retrieval from various financial data sources; imported here, although the data is actually loaded from a local JSON file.</li></ul><p><strong>Other Imports (Lines 44–50):</strong></p><ul><li>json: For working with the JSON data format; used below to load the OHLCV data file.</li><li>datetime: For working with date and time objects. 
Needed for handling the time-series index.</li><li>talib: Technical analysis library; used extensively below to compute the technical indicators.</li><li>ccxt: Library for interacting with cryptocurrency exchanges; imported for live-trading use, though not required for this backtest.</li></ul><p><strong>Context:</strong></p><ul><li>Each library and module is imported with a specific purpose, such as data manipulation, machine learning, evaluation, visualization, backtesting, hyperparameter optimization, etc.</li><li>These libraries and modules will be used throughout the code for various tasks like data preprocessing, model training, evaluation, optimization, and visualization.</li></ul><pre># Define the path to your JSON file<br>file_path = &#39;./ETH_USDT_USDT-15m-futures.json&#39;<br># Open the file and read the data<br>with open(file_path, &quot;r&quot;) as f:<br>    data = json.load(f)<br><br>df = pd.DataFrame(data)<br># Extract the OHLC data (adjust column names as needed)<br># ohlc_data = df[[&quot;date&quot;,&quot;open&quot;, &quot;high&quot;, &quot;low&quot;, &quot;close&quot;, &quot;volume&quot;]]<br>df.rename(columns={0: &quot;Date&quot;, 1: &quot;Open&quot;, 2: &quot;High&quot;,3: &quot;Low&quot;, 4: &quot;Adj Close&quot;, 5: &quot;Volume&quot;}, inplace=True)<br># Convert timestamps to datetime objects<br>df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;] / 1000, unit=&#39;s&#39;)<br>df.set_index(&quot;Date&quot;, inplace=True)<br># Format the date index<br>df.index = df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;)<br>df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br># print(df.dropna(), df.describe(), df.info())<br>data = df<br>data</pre><p>To analyze historical cryptocurrency futures data, we can first load the data from a JSON file. The provided code demonstrates how to use Python’s json library to parse the JSON content into a list of OHLCV rows. We then convert this list into a pandas DataFrame for easier manipulation. 
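</p><p>A self-contained sketch of the same loading-and-cleaning steps on two synthetic rows (the timestamps and prices are made up; passing unit=&#39;ms&#39; directly is equivalent to the article&#39;s divide-by-1000-with-unit=&#39;s&#39; approach):</p>

```python
import pandas as pd

# Synthetic stand-in for the exchange export: each row is
# [timestamp_ms, open, high, low, close, volume], as in the real JSON file.
raw = [
    [1700000000000, 2000.0, 2010.0, 1995.0, 2005.0, 120.5],
    [1700000900000, 2005.0, 2020.0, 2000.0, 2015.0, 98.2],
]

df = pd.DataFrame(raw)
df.rename(columns={0: "Date", 1: "Open", 2: "High", 3: "Low",
                   4: "Adj Close", 5: "Volume"}, inplace=True)
# Millisecond timestamps -> datetime index.
df["Date"] = pd.to_datetime(df["Date"], unit="ms")
df.set_index("Date", inplace=True)
df["Close"] = df["Adj Close"]
```

<p>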
The DataFrame is cleaned and transformed by renaming columns, converting timestamps to datetime objects, setting the date as the index, and formatting the date display for better readability.</p><p><strong>Here’s the step-by-step explanation of the code:</strong></p><p><strong>1. Loading JSON Data:</strong></p><ul><li>The code defines a file path (file_path) to a JSON file containing cryptocurrency data (in the format of Open-High-Low-Close-Volume for Ethereum futures contracts traded with USDT).</li><li>It opens the file for reading (with open(file_path, &quot;r&quot;) as f:) and uses json.load(f) to parse the JSON content into a Python list of OHLCV rows (data).</li></ul><p><strong>2. Converting to DataFrame:</strong></p><ul><li>The code creates a pandas DataFrame (df) from the loaded list (data). A DataFrame is a tabular data structure similar to a spreadsheet, making it easier to work with and analyze the data.</li></ul><p><strong>3. Data Cleaning and Transformation:</strong></p><ul><li>This part assumes the JSON data has columns with numerical indices (0, 1, 2, etc.) instead of meaningful names. It renames these columns to more descriptive labels (&quot;Date&quot;, &quot;Open&quot;, &quot;High&quot;, &quot;Low&quot;, &quot;Adj Close&quot;, &quot;Volume&quot;) using df.rename(columns={...}, inplace=True).</li><li>It converts the &quot;Date&quot; column from timestamps (in milliseconds since the Unix epoch) to datetime objects using pd.to_datetime(). This makes it easier to work with dates and perform time-based operations.</li><li>The code sets the &quot;Date&quot; column as the index of the DataFrame using df.set_index(&quot;Date&quot;, inplace=True). 
This allows you to efficiently access and filter data based on dates.</li><li>It formats the date index using df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;) to display dates in a more readable format (e.g., &quot;05-14-2024 16:35&quot;).</li><li>Finally, it copies the &quot;Adj Close&quot; column (assuming it represents the adjusted closing price) into a new column named &quot;Close&quot;, since several downstream libraries expect a column with that exact name.</li></ul><pre># Assuming you have a DataFrame named &#39;df&#39; with columns &#39;Open&#39;, &#39;High&#39;, &#39;Low&#39;, &#39;Close&#39;, &#39;Adj Close&#39;, and &#39;Volume&#39;<br>target_prediction_number = 2<br>time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>name_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>df = data.copy()<br>new_columns = []<br>for period in time_periods:<br>    for nperiod in name_periods:<br>        df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period*2)<br>        df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=period*0.5)<br>        df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>        df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period*2)<br>        df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period*2)<br>        df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>        macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), 
signalperiod=(period + 9))<br>        df[f&#39;MACD_{period}&#39;] = macd<br>        df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>        bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=period*0.5, nbdevup=2, nbdevdn=2)<br>        df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>        df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>        df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>        df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], timeperiod=(period+35))<br>        <br>    <br>df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br># Volume-Based Indicators<br>df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br># Momentum-Based Indicators<br>df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br># Calculate the Elliott Wave Oscillator (EWO)<br>#df[&#39;EWO&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=5) - ta.SMA(df[&#39;Close&#39;], timeperiod=35)<br># Volatility-Based Indicators<br># df[&#39;ATR&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=14)<br># df[&#39;BB_upper&#39;], df[&#39;BB_middle&#39;], df[&#39;BB_lower&#39;] = ta.BBANDS(df[&#39;Close&#39;], timeperiod=20, nbdevup=2, nbdevdn=2)<br># df[&#39;KC_upper&#39;], df[&#39;KC_middle&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=20), ta.EMA(df[&#39;Low&#39;], timeperiod=20)<br># df[&#39;Donchian_upper&#39;], df[&#39;Donchian_lower&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=20), ta.MIN(df[&#39;Low&#39;], timeperiod=20)<br># Trend-Based 
Indicators<br># df[&#39;MA&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=20)<br># df[&#39;EMA&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=20)<br>df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br># Set pandas option to display all columns<br>pd.set_option(&#39;display.max_columns&#39;, None)<br># Displaying the calculated indicators<br>print(df.tail())<br>df.dropna(inplace=True)<br>print(&quot;Length: &quot;, len(df))<br>df</pre><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>This code demonstrates the calculation of various technical indicators using the talib library. The code iterates through different time periods to compute indicators like Average True Range (ATR), Exponential Moving Average (EMA), Relative Strength Index (RSI), and several others. Additionally, it calculates features like returns, range, and volatility to potentially use as input features for machine learning models.</p><p><strong>1. Technical Indicator Calculations:</strong></p><ul><li>The code iterates through two lists, time_periods and name_periods (which seem to have the same values here). 
Note, however, that nperiod is never used inside the loop body, so the inner loop simply recomputes the same columns; a single loop over time_periods would produce identical results.</li><li>Within the loops, it calculates numerous technical indicators for each specified time period (period) using talib functions:</li><li><strong>Average True Range (ATR):</strong> Measures market volatility (df[f&#39;ATR_{period}&#39;]).</li><li><strong>Exponential Moving Average (EMA):</strong> Calculates EMAs with a period twice the loop’s period (df[f&#39;EMA_{period}&#39;]).</li><li><strong>Relative Strength Index (RSI):</strong> Calculates RSI with a period half the loop’s period (df[f&#39;RSI_{period}&#39;]).</li><li><strong>Volume-Weighted Average Price (VWAP):</strong> Calculates VWAP for the period (df[f&#39;VWAP_{period}&#39;]).</li><li><strong>Rate of Change (ROC):</strong> Calculates ROC for the period (df[f&#39;ROC_{period}&#39;]).</li><li><strong>Keltner Channels (KC):</strong> Calculates upper and middle bands based on EMAs of highs and lows (df[f&#39;KC_upper_{period}&#39;], df[f&#39;KC_middle_{period}&#39;]).</li><li><strong>Donchian Channels:</strong> Calculates upper and lower bands based on maximum and minimum highs/lows within the period (df[f&#39;Donchian_upper_{period}&#39;], df[f&#39;Donchian_lower_{period}&#39;]).</li><li><strong>Moving Average Convergence Divergence (MACD):</strong> Calculates MACD and its signal line for the period (df[f&#39;MACD_{period}&#39;], df[f&#39;MACD_signal_{period}&#39;]).</li><li><strong>Bollinger Bands (BB):</strong> Calculates upper, middle, and lower bands for the period (df[f&#39;BB_upper_{period}&#39;], df[f&#39;BB_middle_{period}&#39;], df[f&#39;BB_lower_{period}&#39;]).</li><li><strong>Elliott Wave Oscillator (EWO):</strong> Calculates EWO for the period (df[f&#39;EWO_{period}&#39;]).</li><li><strong>Target Prediction and Feature Engineering:</strong></li><li>The code defines a target_prediction_number (presumably the number of periods ahead you aim to predict).</li><li>It calculates “Returns” as the 
percentage change in adjusted close prices over the target_prediction_number periods (df[&quot;Returns&quot;]).</li><li>It calculates “Range” as the difference between high and low prices divided by the low price (df[&quot;Range&quot;]).</li><li>It calculates “Volatility” as the rolling standard deviation of returns over the target_prediction_number periods (df[&quot;Volatility&quot;]).</li><li><strong>Additional Indicators:</strong></li><li>The code calculates On-Balance Volume (OBV) and Accumulation Distribution Line (ADL) using talib functions (df[&#39;OBV&#39;], df[&#39;ADL&#39;]).</li><li>It calculates the Stochastic Oscillator using talib (df[&#39;Stoch_Oscillator&#39;]).</li><li>It calculates the Parabolic Stop and Reversal (PSAR) using talib (df[&#39;PSAR&#39;]).</li></ul><h3>Data Preprocessing — Setting up the “Target” Value for Estimating Future Predictions</h3><pre># Target flexible way<br>pipdiff_percentage = 0.01  # 1% (0.01) of the asset&#39;s price for TP<br>SLTPRatio = 2.0  # pipdiff/Ratio gives SL<br>def mytarget(barsupfront, df1):<br>    length = len(df1)<br>    high = list(df1[&#39;High&#39;])<br>    low = list(df1[&#39;Low&#39;])<br>    close = list(df1[&#39;Close&#39;])<br>    open_ = list(df1[&#39;Open&#39;])  # Renamed &#39;open&#39; to &#39;open_&#39; to avoid conflict with Python&#39;s built-in function<br>    trendcat = [None] * length<br>    for line in range(0, length - barsupfront - 2):<br>        valueOpenLow = 0<br>        valueOpenHigh = 0<br>        for i in range(1, barsupfront + 2):<br>            value1 = open_[line + 1] - low[line + i]<br>            value2 = open_[line + 1] - high[line + i]<br>            valueOpenLow = max(value1, valueOpenLow)<br>            valueOpenHigh = min(value2, valueOpenHigh)<br>            if (valueOpenLow &gt;= close[line + 1] * pipdiff_percentage) and (<br>                    -valueOpenHigh &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio):<br>                trendcat[line] = 2  # downtrend (short)<br>                break<br>            elif (valueOpenLow &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio) and (<br>                    -valueOpenHigh &gt;= close[line + 1] * pipdiff_percentage):<br>                trendcat[line] = 1  # uptrend (long)<br>                break<br>            else:<br>                trendcat[line] = 0  # no clear trend<br>    return trendcat</pre><p>This code defines a function mytarget that attempts to identify potential trends and set target values accordingly. It calculates the difference between the open price and upcoming highs/lows within a specified timeframe (barsupfront). Based on these differences and thresholds defined by pipdiff_percentage and SLTPRatio, the function classifies the trend as uptrend, downtrend, or no clear trend. These classifications could then be used to set target buy/sell prices in a trading strategy.</p><p><strong>Here’s the breakdown of the code provided:</strong></p><p><strong>Parameters:</strong></p><ul><li>barsupfront (integer): The number of bars to look ahead from the current bar for trend classification.</li><li>df1 (pandas DataFrame): The DataFrame containing OHLC (Open, High, Low, Close) prices.</li></ul><p><strong>Function Logic:</strong></p><ol><li><strong>Initialization:</strong></li></ol><ul><li>It retrieves the length of the DataFrame (length).</li><li>It extracts lists of high, low, close, and open prices (high, low, close, open_). Note that open is renamed to open_ to avoid conflicts with Python&#39;s built-in open function.</li><li>It initializes a list trendcat with length elements, all set to None, which will eventually hold the trend category (uptrend, downtrend, or no trend) for each bar.</li></ul><p><strong>2. 
Trend Classification Loop:</strong></p><ul><li>The code iterates through the DataFrame from the first bar up to index length - barsupfront - 2, so that enough future bars remain to evaluate each classification.</li><li>Inside the loop:</li><li>It calculates two values:</li><li>valueOpenLow: the largest drop below the open price, i.e. the maximum of open minus the low prices over the next barsupfront + 1 bars.</li><li>valueOpenHigh: the negative of the largest rise above the open price, i.e. the minimum of open minus the high prices over the next barsupfront + 1 bars.</li><li>It checks these values against thresholds based on pipdiff_percentage (a percentage of the asset&#39;s price, the take-profit distance) and SLTPRatio (which derives the stop-loss distance):</li><li>If valueOpenLow is greater than or equal to close[line + 1] * pipdiff_percentage (the price falls at least the take-profit distance below the open) AND -valueOpenHigh is less than or equal to close[line + 1] * pipdiff_percentage / SLTPRatio (the rise above the open stays within the stop-loss distance), it classifies the bar as a downtrend (trendcat[line] is set to 2).</li><li>Conversely, if valueOpenLow is less than or equal to close[line + 1] * pipdiff_percentage / SLTPRatio (the drop below the open stays within the stop-loss distance) AND -valueOpenHigh is greater than or equal to close[line + 1] * pipdiff_percentage (the price rises at least the take-profit distance above the open), it classifies the bar as an uptrend (trendcat[line] is set to 1).</li><li>If neither condition is met, it marks no clear trend (trendcat[line] is set to 0).</li></ul><p><strong>3. Return:</strong></p><ul><li>The function returns the trendcat list containing the trend classification for each bar (except the last barsupfront + 2 bars, which remain None).</li></ul><pre>#!!! 
pitfall one category high frequency<br>df[&#39;Target&#39;] = mytarget(2, df)<br>df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)<br>#df.tail(20)<br>df.replace([np.inf, -np.inf], np.nan, inplace=True)<br>df.dropna(axis=0, inplace=True)<br># Convert columns to integer type<br>df = df.astype(int)<br>#df[&#39;Target&#39;] = df[&#39;Target&#39;].astype(int)<br>df[&#39;Target&#39;].hist()<br>count_of_twos_target = df[&#39;Target&#39;].value_counts().get(2, 0)<br>count_of_zeros_target = df[&#39;Target&#39;].value_counts().get(0, 0)<br>count_of_ones_target = df[&#39;Target&#39;].value_counts().get(1, 0)<br>percent_of_zeros_over_ones_and_twos = (100 - (count_of_zeros_target/ (count_of_zeros_target + count_of_ones_target + count_of_twos_target))*100)<br>print(f&#39; count_of_zeros = {count_of_zeros_target}\n count_of_twos_target = {count_of_twos_target}\n count_of_ones_target={count_of_ones_target}\n percent_of_zeros_over_ones_and_twos = {round(percent_of_zeros_over_ones_and_twos,2)}%&#39;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/373/1*hnmezwvqGGgIlWUVFCut6Q.png" /><figcaption>output of the above code</figcaption></figure><p>After assigning trend classifications (Target) based on the mytarget function, the code performs data cleaning by handling infinities and removing rows with missing values. It then analyzes the distribution of target values using a histogram and calculates the proportion of bars classified as each trend category. This helps assess the balance between clear uptrends, downtrends, and periods with no clear trend in the data.</p><p><strong>1. 
Assigning Target Values and Shifting:</strong></p><ul><li>The code assigns the output of mytarget(2, df) (the trend classifications) to the &#39;Target&#39; column (df[&#39;Target&#39;] = mytarget(2, df)).</li><li>It then shifts the &#39;Target&#39; column forward by one bar (df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)) because the trend classification is based on future price movements. This means the target value at bar n is the classification computed at bar n-1.</li></ul><p><strong>2. Handling Infinities and Missing Values:</strong></p><ul><li>The code replaces positive and negative infinity (np.inf and -np.inf) with NaN (Not a Number) values in the DataFrame (df.replace([np.inf, -np.inf], np.nan, inplace=True)). This is necessary because some mathematical operations cannot handle infinities.</li><li>It then removes rows with missing values (NaN) from the DataFrame (df.dropna(axis=0, inplace=True)) to ensure clean data for further analysis.</li></ul><p><strong>3. Converting Data Types:</strong></p><ul><li>The line df = df.astype(int) converts every column in the DataFrame to integers; the more targeted df[&#39;Target&#39;] = df[&#39;Target&#39;].astype(int) is the line left commented out. Since the &#39;Target&#39; column holds categorical labels (0, 1, or 2), converting only that column would normally suffice; the blanket conversion also truncates the price columns, so apply it with care.</li></ul><p><strong>4. Analyzing Target Distribution:</strong></p><ul><li>The code plots a histogram of the &#39;Target&#39; column (df[&#39;Target&#39;].hist()). This helps visualize the distribution of target values (uptrend, downtrend, or no trend) across the data.</li><li>It then calculates the counts of each target value (1, 2, and 0) using value_counts().</li><li>Finally, it calculates the percentage of bars classified as an uptrend or downtrend out of all bars (percent_of_zeros_over_ones_and_twos; despite its name, this is 100 minus the share of &quot;no trend&quot; bars). 
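</li></ul><p>For concreteness, the labeling logic described above can be sketched as a self-contained function. This is a reconstruction from the article&#39;s description, not its exact source; the default pipdiff_percentage and SLTPRatio values and the synthetic OHLC data are illustrative assumptions:</p>

```python
import numpy as np
import pandas as pd

def mytarget(barsupfront, df, pipdiff_percentage=0.015, SLTPRatio=2):
    """Label each bar 0 (no clear trend), 1 (uptrend) or 2 (downtrend)
    by scanning the next `barsupfront + 1` bars' highs and lows."""
    length = len(df)
    high, low = df['High'].values, df['Low'].values
    open_, close = df['Open'].values, df['Close'].values
    trendcat = np.zeros(length, dtype=int)
    for line in range(barsupfront, length - barsupfront - 2):
        valueOpenLow = 0.0   # largest drop below the open
        valueOpenHigh = 0.0  # most negative (open - high): largest rise above the open
        for i in range(1, barsupfront + 2):
            valueOpenLow = max(open_[line + 1] - low[line + i], valueOpenLow)
            valueOpenHigh = min(open_[line + 1] - high[line + i], valueOpenHigh)
        threshold = close[line + 1] * pipdiff_percentage
        if valueOpenLow >= threshold and -valueOpenHigh <= threshold / SLTPRatio:
            trendcat[line] = 2   # price dropped well below the open: downtrend
        elif -valueOpenHigh >= threshold and valueOpenLow <= threshold / SLTPRatio:
            trendcat[line] = 1   # price rose well above the open: uptrend
    return trendcat

# Illustrative synthetic OHLC data (random walk around 100)
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 200))
df_demo = pd.DataFrame({'Open': prices, 'High': prices + 1,
                        'Low': prices - 1, 'Close': prices})
labels = mytarget(2, df_demo)
```

<p>On real candles the &#39;no trend&#39; class usually dominates at this threshold, which is exactly the imbalance the histogram and counts above are meant to surface.</p><ul><li>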
This provides insights into the balance between clear trends and unclear trends in the data.</li></ul><p>This code segment effectively calculates target categories based on predefined criteria and provides insights into the distribution of these categories within the dataset.</p><h3>Checking if the above Code is Giving Best Possible Returns for the “Target” Data Created:</h3><pre># Check for NaN values:<br>has_nan = df[&#39;Target&#39;].isnull().values.any()<br>print(&quot;NaN values present:&quot;, has_nan)<br># Check for infinite values:<br>has_inf = df[&#39;Target&#39;].isin([np.inf, -np.inf]).values.any()<br>print(&quot;Infinite values present:&quot;, has_inf)<br># Count the number of NaN and infinite values:<br>nan_count = df[&#39;Target&#39;].isnull().sum()<br>inf_count = (df[&#39;Target&#39;] == np.inf).sum() + (df[&#39;Target&#39;] == -np.inf).sum()<br>print(&quot;Number of NaN values:&quot;, nan_count)<br>print(&quot;Number of infinite values:&quot;, inf_count)<br># Get the indices of NaN and infinite values:<br>nan_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isnull()]<br>inf_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isin([np.inf, -np.inf])]<br>print(&quot;Indices of NaN values:&quot;, nan_indices)<br>df[&#39;Target&#39;]<br>df = df.reset_index(inplace=False)<br>df[&#39;Date&#39;] = pd.to_datetime(df[&#39;Date&#39;])<br>df.set_index(&#39;Date&#39;, inplace=True)<br>def SIGNAL(df):<br>    return df[&#39;Target&#39;]<br>from backtesting import Strategy<br>class MyCandlesStrat(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1 = self.I(SIGNAL, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        if self.signal1 == 1:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, 
tp=tp_price)<br>        elif self.signal1 == 2:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br>            <br>bt = Backtest(df, MyCandlesStrat, cash=100000, commission=.001, exclusive_orders = True)<br>stat = bt.run()<br>stat</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/368/1*0kb5rR7CnLf7tani_mNMLQ.png" /><figcaption>output of above code</figcaption></figure><ol><li><strong>Checking for Missing and Infinite Values:</strong></li></ol><ul><li>The code checks for the presence of NaN (Not a Number) and infinite values in the &#39;Target&#39; column (df[&#39;Target&#39;]).</li><li>It then counts the number of occurrences and retrieves the indices of these values.</li><li>These checks are crucial because backtesting libraries typically cannot handle missing or infinite values in signals.</li></ul><p><strong>2. Backtesting Framework Setup:</strong></p><ul><li>The code defines a function SIGNAL(df) that simply returns the &#39;Target&#39; column values. This function essentially provides the buy/sell signals based on the target classifications (1 for uptrend buy, 2 for downtrend sell).</li><li>It imports the Strategy class from the backtesting library.</li><li>It defines a custom strategy class MyCandlesStrat that inherits from Strategy.</li><li>The init method initializes an indicator named signal1 that holds the target values using the I function (presumably from backtesting).</li><li>The next method defines the trading logic:</li><li>If the signal1 is 1 (uptrend), it places a buy order with a stop-loss and take-profit based on percentages of the closing price.</li><li>If the signal1 is 2 (downtrend), it places a sell order with a stop-loss and take-profit based on percentages of the closing price.</li></ul><p><strong>3. 
Backtesting and Evaluation:</strong></p><ul><li>The code creates a Backtest object using the backtesting library. It provides the DataFrame (df), the strategy class (MyCandlesStrat), initial capital (cash), commission rate (commission), and sets exclusive_orders to True (so each new order closes any open position, preventing overlapping trades).</li><li>It runs the backtest using the bt.run() method and stores the results in the stat variable. Note that Backtest must be imported alongside Strategy (from backtesting import Backtest, Strategy) for this to run.</li></ul><p><strong>Does this code definitively determine the effectiveness of the target values?</strong></p><p>No, this code doesn’t definitively determine the effectiveness of the target values. Here’s why:</p><ul><li><strong>Parameter Optimization:</strong> The stop-loss and take-profit percentages (sl_pct and tp_pct) are fixed in the code. Optimizing these parameters for the specific strategy and market conditions could potentially improve performance.</li><li><strong>Single Backtest Run:</strong> A single run over one historical window doesn’t show how the strategy holds up across different market regimes. Ideally, you’d repeat the backtest over multiple time periods (walk-forward analysis) to assess its robustness.</li></ul><p><strong>How to improve the code for target evaluation?</strong></p><ul><li><strong>Calculate Performance Metrics:</strong> Modify the code to calculate and print relevant performance metrics like Sharpe Ratio, drawdown, and total profit after the backtest run.</li><li><strong>Optimize Stop-Loss and Take-Profit:</strong> Implement a parameter optimization process to find the best stop-loss and take-profit values for the strategy using the target signals.</li><li><strong>Multiple Backtest Runs:</strong> Run the backtest over different time windows (e.g., in a rolling walk-forward loop) and analyze the distribution of performance metrics to assess the strategy’s consistency.</li></ul><p>By incorporating these improvements, we can gain a more comprehensive understanding of how well the target values from the mytarget function perform in a backtesting framework. 
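</p><p>As a concrete starting point for the first suggestion, the Sharpe ratio and maximum drawdown can be computed directly from an equity curve with plain NumPy. This is a minimal sketch, not the backtesting library&#39;s own calculation; performance_metrics is a hypothetical helper, and bars_per_year = 35040 assumes 15-minute candles:</p>

```python
import numpy as np

def performance_metrics(equity, bars_per_year=35040):
    """Annualized Sharpe ratio and max drawdown from an equity curve
    sampled once per bar (35040 fifteen-minute bars per year)."""
    equity = np.asarray(equity, dtype=float)
    rets = np.diff(equity) / equity[:-1]          # per-bar simple returns
    sharpe = np.sqrt(bars_per_year) * rets.mean() / rets.std(ddof=1)
    running_max = np.maximum.accumulate(equity)   # peak equity so far
    max_dd = ((equity - running_max) / running_max).min()  # most negative dip
    return sharpe, max_dd

sharpe, max_dd = performance_metrics([100, 110, 99, 120])
```

<p>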
Remember, backtesting results are not guarantees of future performance, so real-world testing with a smaller capital allocation is essential before deploying a strategy with real money.</p><h3>Scaling and splitting the dataframe for training and testing:</h3><pre>scaler = MinMaxScaler(feature_range=(0,1))<br>df_model = df.copy()<br># Split into Learning (X) and Target (y) Data<br>X = df_model.iloc[:, : -1]<br>y = df_model.iloc[:, -1]<br>X_scaled = scaler.fit_transform(X)<br># Define a function to reshape the data<br>def reshape_data(data, time_steps):<br>    samples = len(data) - time_steps + 1<br>    reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>    for i in range(samples):<br>        reshaped_data[i] = data[i:i + time_steps]<br>    return reshaped_data<br># Reshape the scaled X data<br>time_steps = 1  # Adjust the number of time steps as needed<br>X_reshaped = reshape_data(X_scaled, time_steps)<br># Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, features)<br># Each sample contains scaled data for a specific time window<br># Align y with X_reshaped by discarding excess target values<br>y_aligned = y[time_steps - 1:]  # Discard the first (time_steps - 1) target values<br>X = X_reshaped<br>y = y_aligned<br>print(len(X),len(y))<br># Split data into train and test sets (considering time series data)<br>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)</pre><p><strong>1. Data Preparation:</strong></p><ul><li><strong>Copying Data:</strong> It creates a copy of the original DataFrame (df_model = df.copy()) to avoid modifying the original data.</li></ul><p><strong>2. Splitting Features and Target:</strong></p><ul><li><strong>Separating Features (X) and Target (y):</strong> It separates the features (all columns except the last) and the target variable (the last column) using slicing (X = df_model.iloc[:, : -1], y = df_model.iloc[:, -1]).</li></ul><p><strong>3. 
Scaling Features:</strong></p><ul><li><strong>MinMaxScaler:</strong> It creates a MinMaxScaler object to scale the features between 0 and 1 (scaler = MinMaxScaler(feature_range=(0,1))). This can be helpful for some machine learning algorithms that work better with normalized data.</li><li><strong>Scaling X:</strong> It scales the feature data (X) using the fit_transform method of the scaler (X_scaled = scaler.fit_transform(X)).</li></ul><p><strong>4. Reshaping Data (Windowing):</strong></p><ul><li><strong>Reshape Function:</strong> It defines a function reshape_data that takes the data and the number of time steps (time_steps) as input.</li><li>This function iterates through the data with a sliding window of time_steps and creates a new 3D array (reshaped_data).</li><li>Each element in the new array represents a sample, containing a sequence of time_steps data points for each feature.</li><li><strong>Reshaping Scaled X:</strong> It defines the number of time steps (time_steps) and reshapes the scaled feature data (X_scaled) using the reshape_data function (X_reshaped = reshape_data(X_scaled, time_steps)).</li><li>This step transforms the data into a format suitable for time series forecasting models that require sequences of past observations to predict future values.</li></ul><p><strong>5. Aligning Target with Reshaped Data:</strong></p><ul><li><strong>Discarding Excess Target Values:</strong> Since the reshaped data (X_reshaped) considers a window of time_steps, the corresponding target values need an adjustment. It discards the first time_steps - 1 target values from y to align with the reshaped data (y_aligned = y[time_steps - 1:]).</li></ul><p><strong>6. 
Final Splitting (Train-Test):</strong></p><ul><li><strong>Train-Test Split:</strong> It splits the reshaped features (X) and aligned target (y) into training and testing sets using train_test_split from scikit-learn (X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)).</li><li>It sets test_size=0.3 to allocate 30% of the data for testing and shuffle=False because shuffling data in time series can disrupt the temporal order.</li></ul><p><strong>Overall, this code effectively addresses key aspects of data preparation for time series forecasting models:</strong></p><ul><li>Scaling features to a common range can improve model performance for some algorithms.</li><li>Reshaping data into a 3D structure with time steps allows models to learn from sequences of past observations.</li><li>Aligning the target variable with the reshaped data ensures the model predicts for the correct time steps.</li><li>Splitting data into training and testing sets with shuffle=False preserves the temporal order for time series forecasting.</li></ul><p><strong>Additional Considerations:</strong></p><ul><li>The choice of scaler (MinMaxScaler, StandardScaler, etc.) 
might depend on the specific model and data characteristics.</li><li>You might explore different window sizes (time_steps) to see how they affect model performance.</li><li>Techniques like stationarity checks and differencing might be necessary for certain time series data before applying these steps.</li></ul><h3>2D CNN Model Manual Optimization</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AtZN0ztGJ2XwjbTd.png" /><figcaption>CNN Classifier for crypto algo trading</figcaption></figure><pre>from sklearn.utils.class_weight import compute_class_weight<br>from keras import backend as K<br><br><br>def build_model(hp):<br>    inputs = Input(shape=(X_train.shape[1], X_train.shape[2], 1))<br>    x = Conv2D(hp.Int(&#39;conv_units&#39;, min_value=16, max_value=64, step=16), (5, 5), padding=&#39;same&#39;, activation=&#39;relu&#39;)(inputs)<br>    x = MaxPooling2D(pool_size=(2, 2), padding=&#39;same&#39;)(x)<br>    x = Dense(units=hp.Int(&#39;dense_units&#39;, min_value=100, max_value=300, step=50), activation=&#39;relu&#39;)(x)<br>    x = Dropout(hp.Float(&#39;dropout_rate&#39;, min_value=0.1, max_value=0.5, step=0.1))(x)<br>    x = Flatten()(x)<br>    x = Dense(units=hp.Int(&#39;dense_units&#39;, min_value=100, max_value=300, step=50), activation=&#39;relu&#39;)(x)<br>    outputs = Dense(3, activation=&#39;softmax&#39;)(x)<br>    model = Model(inputs=inputs, outputs=outputs)<br>    optimizer = Adam(learning_rate=hp.Choice(&#39;learning_rate&#39;, values=[1e-2, 1e-3, 1e-4]))<br>    model.compile(optimizer=optimizer, loss=&#39;categorical_crossentropy&#39;, metrics=[&#39;accuracy&#39;, Precision(), Recall()])<br>    return model<br><br>tuner = kt.Hyperband(<br>    build_model,<br>    objective=kt.Objective(&quot;val_recall&quot;, direction=&quot;max&quot;),<br>    max_epochs=20,<br>    factor=3,<br>    directory=&#39;my_dir&#39;,<br>    project_name=&#39;hyperopt_cnn&#39;<br>)<br><br>tuner.search(X_train, y_train_one_hot, epochs=10, 
validation_split=0.2)<br><br># Get the best hyperparameters<br>best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]<br><br># Calculate class weights to handle class imbalance<br>class_weights = compute_class_weight(&#39;balanced&#39;, classes=np.unique(y_train), y=y_train)<br>class_weight_dict = dict(enumerate(class_weights))<br><br># Build the model with the best hyperparameters<br>best_cnn_model = tuner.hypermodel.build(best_hps)<br><br># Fit the model to the training data with best hyperparameters<br># best_cnn_model.fit(X_train, y_train_one_hot, epochs=100, batch_size=24, validation_split=0.2, verbose=1, class_weight=class_weight_dict)<br>best_cnn_model.fit(X_train, y_train_one_hot, epochs=100, batch_size=18, validation_split=0.2, verbose=1)<br><br></pre><p>This code defines and trains a 2D CNN-based model for classifying ETH price movements into three categories: neutral (0), long (1), and short (2). Here’s a breakdown:</p><h4>Importing Libraries</h4><pre>from sklearn.utils.class_weight import compute_class_weight<br>from keras import backend as K<br>import keras_tuner as kt<br>from keras.models import Model<br>from keras.layers import Input, Conv2D, MaxPooling2D, Dense, Dropout, Flatten<br>from keras.optimizers import Adam<br>from keras.metrics import Precision, Recall<br>import numpy as np</pre><ul><li><strong>compute_class_weight</strong>: Computes weights for classes to handle imbalanced datasets.</li><li><strong>keras.backend</strong>: Provides backend functions for operations.</li><li><strong>keras_tuner</strong>: Helps in hyperparameter tuning of models.</li><li><strong>Model</strong>: Used to create a Keras model.</li><li><strong>Layers</strong>: Various layers used to build the CNN model.</li><li><strong>Adam</strong>: The optimizer used to compile the model.</li><li><strong>Metrics</strong>: Precision and Recall metrics used to evaluate the model.</li></ul><h4>Building the Model</h4><pre>def build_model(hp):<br>    inputs = 
Input(shape=(X_train.shape[1], X_train.shape[2], 1))<br>    x = Conv2D(hp.Int(&#39;conv_units&#39;, min_value=16, max_value=64, step=16), (5, 5), padding=&#39;same&#39;, activation=&#39;relu&#39;)(inputs)<br>    x = MaxPooling2D(pool_size=(2, 2), padding=&#39;same&#39;)(x)<br>    x = Dense(units=hp.Int(&#39;dense_units&#39;, min_value=100, max_value=300, step=50), activation=&#39;relu&#39;)(x)<br>    x = Dropout(hp.Float(&#39;dropout_rate&#39;, min_value=0.1, max_value=0.5, step=0.1))(x)<br>    x = Flatten()(x)<br>    x = Dense(units=hp.Int(&#39;dense_units&#39;, min_value=100, max_value=300, step=50), activation=&#39;relu&#39;)(x)<br>    outputs = Dense(3, activation=&#39;softmax&#39;)(x)<br>    model = Model(inputs=inputs, outputs=outputs)<br>    optimizer = Adam(learning_rate=hp.Choice(&#39;learning_rate&#39;, values=[1e-2, 1e-3, 1e-4]))<br>    model.compile(optimizer=optimizer, loss=&#39;categorical_crossentropy&#39;, metrics=[&#39;accuracy&#39;, Precision(), Recall()])<br>    return model</pre><ul><li><strong>inputs</strong>: Input layer accepting data with shape (number_of_timesteps, number_of_features, 1).</li><li><strong>Conv2D</strong>: Convolutional layer with filters determined by hyperparameters (hp.Int). 
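</li></ul><p>Note that the Input shape ends in a channel dimension of 1: the windowed data from the scaling step is three-dimensional, so a trailing axis must be added before feeding it to Conv2D. A minimal sketch (the array sizes here are illustrative, not the article&#39;s actual dataset dimensions):</p>

```python
import numpy as np

# Stand-in for the scaled, windowed data: (samples, time_steps, features)
X_reshaped = np.random.rand(100, 1, 12)

# Conv2D expects (samples, height, width, channels); add a channel axis of 1
X_cnn = X_reshaped[..., np.newaxis]
```

<p>Keras also performs this expansion implicitly in some cases, but making the channel axis explicit keeps the input shape unambiguous.</p><ul><li>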
It extracts local patterns.</li><li><strong>MaxPooling2D</strong>: Pooling layer to reduce the spatial dimensions.</li><li><strong>Dense</strong>: Fully connected layers to interpret the extracted features.</li><li><strong>Dropout</strong>: Regularization technique to prevent overfitting.</li><li><strong>Flatten</strong>: Converts the 2D matrix to a 1D vector for the dense layers.</li><li><strong>outputs</strong>: Final output layer with a softmax activation function for multi-class classification.</li></ul><h4>Hyperparameter Tuning</h4><pre>tuner = kt.Hyperband(<br>    build_model,<br>    objective=kt.Objective(&quot;val_recall&quot;, direction=&quot;max&quot;),<br>    max_epochs=20,<br>    factor=3,<br>    directory=&#39;my_dir&#39;,<br>    project_name=&#39;hyperopt_cnn&#39;<br>)</pre><ul><li><strong>Hyperband</strong>: A Keras Tuner algorithm for hyperparameter tuning.</li><li><strong>objective</strong>: The metric to optimize during tuning (validation recall in this case).</li><li><strong>max_epochs</strong>: Maximum number of epochs to train.</li><li><strong>factor</strong>: Reduction factor for early stopping.</li><li><strong>directory/project_name</strong>: Where to save the tuning results.</li></ul><h4>Running the Tuner</h4><pre>tuner.search(X_train, y_train_one_hot, epochs=10, validation_split=0.2)</pre><ul><li><strong>search</strong>: Runs the hyperparameter tuning process on the training data.</li></ul><h4>Get Best Hyperparameters</h4><pre>best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]</pre><ul><li>Retrieves the best hyperparameters identified during tuning.</li></ul><h4>Compute Class Weights</h4><pre>class_weights = compute_class_weight(&#39;balanced&#39;, classes=np.unique(y_train), y=y_train)<br>class_weight_dict = dict(enumerate(class_weights))</pre><ul><li><strong>compute_class_weight</strong>: Balances the classes by computing weights inversely proportional to their frequencies.</li><li><strong>class_weight_dict</strong>: Dictionary of 
class weights.</li></ul><h4>Build and Train the Model with Best Hyperparameters</h4><pre>best_cnn_model = tuner.hypermodel.build(best_hps)<br>best_cnn_model.fit(X_train, y_train_one_hot, epochs=100, batch_size=18, validation_split=0.2, verbose=1)</pre><ul><li><strong>build</strong>: Builds the model with the best hyperparameters.</li><li><strong>fit</strong>: Trains the model on the training data.</li></ul><h3>Why 2D CNN Might be Better than 1D CNN for This Task</h3><p><strong>Feature Interactions</strong>:</p><ul><li><strong>2D CNN</strong>: Can capture interactions between different features over time. This is crucial for time series data with multiple features, as it can learn complex patterns.</li><li><strong>1D CNN</strong>: Typically captures temporal patterns within a single feature or one-dimensional sequence of data.</li></ul><p><strong>Spatial Relationships</strong>:</p><ul><li><strong>2D CNN</strong>: More effective in understanding spatial relationships between features, which can be valuable when multiple related features are present.</li><li><strong>1D CNN</strong>: Focuses on one-dimensional sequences, which may not capture interactions between multiple features as effectively.</li></ul><p><strong>Dimensionality Reduction</strong>:</p><ul><li><strong>2D CNN</strong>: Pooling layers can reduce the dimensions more efficiently by considering spatial context, leading to better generalization.</li><li><strong>1D CNN</strong>: May require more layers or parameters to achieve similar dimensionality reduction, potentially increasing complexity.</li></ul><p><strong>Pattern Recognition</strong>:</p><ul><li><strong>2D CNN</strong>: Can detect complex patterns by applying filters across two dimensions (time and features), which is beneficial for multi-feature time series data.</li><li><strong>1D CNN</strong>: Limited to recognizing patterns in one dimension, which may not be sufficient for complex multi-feature datasets.</li></ul><h4>Conclusion</h4><p>The given 
code sets up a 2D CNN model with hyperparameter tuning using Keras Tuner. The model is designed to handle multi-class classification for time series data of cryptocurrencies. By leveraging 2D CNNs, the model can capture intricate patterns across both time and features, potentially leading to more accurate and robust predictions compared to 1D CNNs. The use of hyperparameter tuning ensures that the model is optimized for the given task, further enhancing its performance.</p><p><strong>Additional Notes:</strong></p><ul><li>The provided code might require adjustments based on your specific data and desired performance. Hyperparameter tuning (e.g., number of units, dropout rate, learning rate) is crucial for optimizing the model.</li><li>Consider using techniques like normalization or standardization for your features to improve model performance.</li></ul><p><strong>Further Exploration:</strong></p><ul><li>Experiment with different hyperparameters (number of layers, units, attention heads) to find the best configuration for your data and task.</li><li>Consider incorporating additional features like technical indicators or fundamental data points to potentially improve the model’s prediction accuracy.</li><li>Evaluate the model’s performance using various metrics like precision, recall, F1-score, or a custom metric based on your specific trading strategy.</li></ul><p><strong>Real-World Considerations:</strong></p><ul><li>Financial markets are complex and influenced by various factors. Past price movements don’t guarantee future performance.</li><li>Use the model predictions as a guide, not a definitive signal. 
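</li></ul><p>One practical gap in the listings above: fit is called with y_train_one_hot, which is never constructed. Because the model is compiled with categorical_crossentropy, the integer labels (0/1/2) must be one-hot encoded first. A minimal NumPy sketch (one_hot is a hypothetical helper; keras.utils.to_categorical does the same job):</p>

```python
import numpy as np

def one_hot(labels, num_classes=3):
    """Turn integer class labels (0=neutral, 1=long, 2=short) into one-hot rows."""
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

y_train_one_hot_demo = one_hot([0, 2, 1, 1])
```

<p>Alternatively, keep the integer labels and compile with sparse_categorical_crossentropy.</p><ul><li>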
Consider risk management strategies and other factors before making trading decisions.</li><li>Backtest your model on historical data to assess its performance in different market conditions.</li></ul><pre>import numpy as np<br>import matplotlib.pyplot as plt<br>from sklearn.metrics import classification_report, confusion_matrix<br>import seaborn as sns<br><br># # Reshape X_train and X_test back to their original shapes<br># X_train_original_shape = X_train.reshape(X_train.shape[0], -1)<br># X_test_original_shape = X_test.reshape(X_test.shape[0], -1)<br><br># X_test_reshaped = X_test_original_shape.reshape(-1, 1, X_test_original_shape.shape[1])<br><br><br># Now X_train_original_shape and X_test_original_shape have their original shapes<br><br># Perform prediction on the original shape data<br># y_pred = model.predict(X_test_reshaped)<br>y_pred = best_cnn_model.predict(X_test)<br><br><br># Perform any necessary post-processing on y_pred if needed<br># For example, if your model outputs probabilities, you might convert them to class labels using argmax:<br><br>y_pred_classes = np.argmax(y_pred, axis=1)<br><br># Convert one-hot encoded y_test to class labels<br>y_test_classes = y_test<br><br># Plot confusion matrix for test data<br>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes)<br><br># Plot confusion matrix<br>plt.figure(figsize=(8, 6))<br>sns.heatmap(conf_matrix_test, annot=True, cmap=&#39;Blues&#39;, fmt=&#39;g&#39;, cbar=False)<br>plt.xlabel(&#39;Predicted labels&#39;)<br>plt.ylabel(&#39;True labels&#39;)<br>plt.title(&#39;Confusion Matrix - Test Data&#39;)<br>plt.show()<br><br><br><br># Compute classification report<br>report = classification_report(y_test_classes, y_pred_classes)<br>print(&quot;Classification Report:\n&quot;, report)<br><br>print(&quot;Confusion Matrix for Hyperopt Model:&quot;)<br>print(confusion_matrix(y_test_classes, y_pred_classes))<br><br></pre><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/523/1*CU1csughnkzSxqzFwPFDUw.png" /></figure><p><strong>1. Imports:</strong></p><ul><li>confusion_matrix from sklearn.metrics for calculating the confusion matrix.</li><li>matplotlib.pyplot (plt) and seaborn (sns) for creating the confusion matrix visualization.</li><li>classification_report from sklearn.metrics for generating a classification report.</li></ul><p><strong>2. Reshaping Data (Commented Out):</strong></p><ul><li>The commented section addresses potential reshaping issues. It’s important to ensure your test data (X_test) has the correct shape expected by the model for prediction.</li></ul><p><strong>3. Prediction:</strong></p><ul><li>y_pred = best_cnn_model.predict(X_test) performs predictions on the test data using your trained model.</li></ul><p><strong>4. Post-processing Predictions:</strong></p><ul><li>y_pred_classes = np.argmax(y_pred, axis=1) assumes your model outputs probabilities for each class (neutral, long, short). This line converts the probabilities to class labels by using argmax (finding the index of the maximum value) along axis 1, the class dimension.</li></ul><p><strong>5. Converting True Labels:</strong></p><ul><li>y_test_classes = y_test assumes your y_test data already contains class labels (0, 1, 2) for the test set.</li></ul><p><strong>6. Confusion Matrix:</strong></p><ul><li>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes) calculates the confusion matrix for the test data. It shows how many samples from each true class were predicted into each class by the model.</li></ul><p><strong>7. Visualization:</strong></p><ul><li>The code creates a heatmap visualization of the confusion matrix using seaborn. This allows you to visually inspect how well the model classified each class. Ideally, you want to see high values on the diagonal, indicating correct classifications.</li></ul><p><strong>8. 
Classification Report:</strong></p><ul><li>report = classification_report(y_test_classes, y_pred_classes) generates a classification report for the test data. This report provides metrics like precision, recall, F1-score, and support for each class, offering a more detailed breakdown of the model&#39;s performance.</li></ul><h3>Backtest with Test and Whole Data:</h3><pre>df_ens_test = df.copy() <br><br>df_ens = df_ens_test[len(X_train):]<br><br>df_ens[&#39;best_cnn_model_scaled&#39;] =  np.argmax(best_cnn_model.predict(X_test), axis=1)<br><br>df_ens[&#39;bcns&#39;] = df_ens[&#39;best_cnn_model_scaled&#39;].shift(1).dropna().astype(int)<br><br>df_ens = df_ens.dropna()<br><br>df_ens[&#39;bcns&#39;]<br><br># df_ens = df.copy() <br><br># # df_ens = df_ens_test[len(X_train):]<br><br># df_ens[&#39;best_cnn_model_scaled&#39;] =  np.argmax(best_cnn_model.predict(X), axis=1)<br><br># df_ens[&#39;bcns&#39;] = df_ens[&#39;best_cnn_model_scaled&#39;].shift(-1).dropna().astype(int)<br><br># df_ens = df_ens.dropna()<br><br># df_ens[&#39;bcns&#39;]<br><br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br><br>def SIGNAL_2_6(df_ens):<br>    return df_ens[&#39;bcns&#39;]<br><br>class MyCandlesStrat_2_6(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_2_6, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        if self.signal1_1 == 1:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1_1 == 2:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 
+ sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br><br>            <br>bt_2_6 = Backtest(df_ens, MyCandlesStrat_2_6, cash=100000, commission=.001)<br>stat_2_6 = bt_2_6.run()<br>stat_2_6</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*jgiwkNEdqohNOZXY1tg5HQ.png" /><figcaption>backtest results of 2d CNN neural network for classification of crypto asset</figcaption></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><h4>Data Preparation</h4><pre>df_ens_test = df.copy() <br>df_ens = df_ens_test[len(X_train):]<br>df_ens[&#39;best_cnn_model_scaled&#39;] = np.argmax(best_cnn_model.predict(X_test), axis=1)<br>df_ens[&#39;bcns&#39;] = df_ens[&#39;best_cnn_model_scaled&#39;].shift(1).dropna().astype(int)<br>df_ens = df_ens.dropna()</pre><ul><li><strong>df_ens_test = df.copy()</strong>: Creates a copy of the original dataframe df to ensure the original data is not altered.</li><li><strong>df_ens = df_ens_test[len(X_train):]</strong>: Selects the portion of the dataframe corresponding to the test set by slicing off the length of the training data.</li><li><strong>df_ens[‘best_cnn_model_scaled’] = np.argmax(best_cnn_model.predict(X_test), axis=1)</strong>: Uses the trained CNN model to predict class labels on the test set (X_test). 
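</li></ul><p>The effect of the shift(1) step discussed here is easiest to see on a tiny frame: each bar trades on the previous bar&#39;s prediction, so no signal uses same-bar information. The column names follow the article; the timestamps and prediction values are illustrative:</p>

```python
import pandas as pd

preds = pd.DataFrame(
    {'best_cnn_model_scaled': [1, 2, 0, 1]},
    index=pd.date_range('2024-01-01', periods=4, freq='15min'),
)
# Trade each bar on the *previous* bar's prediction to avoid look-ahead bias
preds['bcns'] = preds['best_cnn_model_scaled'].shift(1)
preds = preds.dropna()   # the first bar has no prior prediction
```

<p>A shift(-1) (as in the commented-out variant) would do the opposite and leak future predictions into the current bar.</p><ul><li>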
The np.argmax function is used to get the class with the highest probability for each prediction.</li><li><strong>df_ens[‘bcns’] = df_ens[‘best_cnn_model_scaled’].shift(1).dropna().astype(int)</strong>: Shifts the predicted class labels by 1 position to avoid look-ahead bias (using future data to make predictions for the current step). Drops any resulting NaN values and converts the series to integers.</li><li><strong>df_ens = df_ens.dropna()</strong>: Ensures no NaN values are left in the dataframe.</li></ul><h4>Indexing and Date Formatting</h4><pre>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)</pre><ul><li><strong>df_ens = df_ens.reset_index(inplace=False)</strong>: Resets the index of the dataframe without modifying it in place.</li><li><strong>df_ens[‘Date’] = pd.to_datetime(df_ens[‘Date’])</strong>: Converts the ‘Date’ column to datetime format.</li><li><strong>df_ens.set_index(‘Date’, inplace=True)</strong>: Sets the ‘Date’ column as the index of the dataframe.</li></ul><h4>Signal Function</h4><pre>def SIGNAL_2_6(df_ens):<br>    return df_ens[&#39;bcns&#39;]</pre><ul><li><strong>SIGNAL_2_6(df_ens)</strong>: A function that returns the ‘bcns’ column, which contains the shifted class labels (signals).</li></ul><h4>Custom Trading Strategy</h4><pre>class MyCandlesStrat_2_6(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_2_6, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        if self.signal1_1 == 1:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1_1 == 2:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct 
= 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)</pre><ul><li><strong>class MyCandlesStrat_2_6(Strategy)</strong>: Defines a custom trading strategy class inheriting from Strategy.</li><li><strong>def init(self)</strong>: Initialization method. The self.signal1_1 attribute is set to the signal function SIGNAL_2_6, which provides the trading signals.</li><li><strong>def next(self)</strong>: The core logic of the strategy executed at each step.</li><li><strong>if self.signal1_1 == 1</strong>: If the signal is 1, a long position is taken with a 5.5% stop-loss and take-profit.</li><li><strong>elif self.signal1_1 == 2</strong>: If the signal is 2, a short position is taken with a 5.5% stop-loss and take-profit.</li></ul><h4>Backtesting the Strategy</h4><pre>bt_2_6 = Backtest(df_ens, MyCandlesStrat_2_6, cash=100000, commission=.001)<br>stat_2_6 = bt_2_6.run()<br>stat_2_6</pre><ul><li><strong>bt_2_6 = Backtest(df_ens, MyCandlesStrat_2_6, cash=100000, commission=.001)</strong>: Initializes a backtest with the prepared dataframe df_ens, the custom strategy MyCandlesStrat_2_6, an initial cash balance of 100,000 units, and a commission rate of 0.1%.</li><li><strong>stat_2_6 = bt_2_6.run()</strong>: Runs the backtest.</li><li><strong>stat_2_6</strong>: Outputs the results and statistics of the backtest.</li></ul><h3>Advantages of 2D CNN Over 1D CNN</h3><p><strong>Capturing Complex Patterns</strong>:</p><ul><li><strong>2D CNN</strong>: Can capture spatial relationships and interactions between different features across time, which is essential for multivariate time series data.</li><li><strong>1D CNN</strong>: Typically focuses on temporal patterns within a single feature or one-dimensional sequence.</li></ul><p><strong>Dimensionality Reduction</strong>:</p><ul><li><strong>2D CNN</strong>: Efficiently reduces dimensionality 
while preserving important features through pooling layers.</li><li><strong>1D CNN</strong>: May require more layers or parameters to achieve similar results.</li></ul><p><strong>Feature Interactions</strong>:</p><ul><li><strong>2D CNN</strong>: Can learn complex interactions between different features, providing a richer representation of the data.</li><li><strong>1D CNN</strong>: Limited to learning patterns in one dimension, which may not capture the full complexity of multivariate data.</li></ul><p>By using a 2D CNN, this approach leverages its ability to capture intricate patterns and interactions in multivariate time series data, potentially leading to better performance in predicting trading signals.</p><pre>from keras.models import save_model<br><br>best_cnn_model.save(f&quot;./models/best_cnn_model_2d_15m_ETH_SL55_TP55_ShRa_{round(stat_2_6[&#39;Sharpe Ratio&#39;],2)}_time_{time.strftime(&#39;%Y%m%d%H%M%S&#39;)}.keras&quot;)</pre><p><strong>Explanation:</strong></p><ol><li><strong>Import:</strong></li></ol><ul><li>save_model from keras.models is imported, although the code below actually calls the model&#39;s own save method.</li></ul><p><strong>2. Filename Definition:</strong></p><ul><li>The filename is constructed using an f-string (formatted string literal). It incorporates various details:</li><li>Path: ./models/: The directory where the model is saved.</li><li>Model Name: best_cnn_model_2d: Base name for the model.</li><li>Data Info: 15m_ETH: Ethereum (ETH) prices on a 15-minute timeframe.</li><li>Hyperparameters: SL55_TP55: The 5.5% stop-loss (SL) and take-profit (TP) values used in the backtesting strategy.</li><li>Performance Metric: ShRa_{round(stat_2_6[&#39;Sharpe Ratio&#39;],2)}: Appends the Sharpe Ratio from the backtesting results (stat_2_6), rounded to two decimal places.</li><li>Timestamp: time_{time.strftime(&#39;%Y%m%d%H%M%S&#39;)}: The date and time at which the model was saved.</li><li>File Extension: .keras: Standard extension for Keras models.</li></ul><p><strong>3. 
Saving the Model:</strong></p><ul><li>best_cnn_model.save(filename): This line writes the trained 2D CNN model to the file with the constructed filename.</li></ul><p><strong>Key Points:</strong></p><ul><li>This approach provides a clear and informative way to save our model, including details about its training parameters, data, and performance.</li><li>You can modify the filename structure to include additional information relevant to your needs.</li></ul><h3>Let’s Backtest the Entire Dataset with the Saved Model:</h3><pre>from keras.models import load_model<br><br># Load the saved 2D CNN model<br>best_model = load_model(&#39;./models/cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.68_time_20240527170917.keras&#39;)</pre><p><strong>Intended Functionality:</strong></p><ol><li><strong>Import:</strong></li></ol><ul><li>load_model from keras.models is used to load a saved model.</li></ul><p><strong>2. Loading the Model:</strong></p><ul><li>best_model = load_model(&#39;./models/cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.68_time_20240527170917.keras&#39;): This line loads the previously saved 2D CNN model from the ./models/ directory.</li></ul><pre>df_ens = df.copy() <br><br># df_ens = df_ens_test[:len(X)]<br>y_pred = best_model.predict(X)<br><br># Perform any necessary post-processing on y_pred if needed<br># For example, if your model outputs probabilities, you might convert them to class labels using argmax:<br># y_pred_classes = np.argmax(y_pred, axis=1)<br>y_pred = np.argmax(y_pred, axis=1) # for lstm, tcn, cnn models (the 2D CNN used here)<br># y_pred = np.argmax(y_pred, axis=2) # for transformer models<br>df_ens[&#39;best_model&#39;] =  y_pred<br>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int)<br>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22)<br>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], 
timeperiod=55)<br>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108)<br>df_ens = df_ens.dropna()<br>df_ens[&#39;bm&#39;]<br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br>def SIGNAL_010(df_ens):<br>    return df_ens[&#39;bm&#39;]<br>def SIGNAL_0122(df_ens):<br>    return df_ens[&#39;ema_22&#39;]<br>def SIGNAL_0155(df_ens):<br>    return df_ens[&#39;ema_55&#39;]<br>def SIGNAL_01108(df_ens):<br>    return df_ens[&#39;ema_108&#39;]<br>class MyCandlesStrat_010(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_010, self.data)<br>        self.ema_1_22 = self.I(SIGNAL_0122, self.data)<br>        self.ema_1_55 = self.I(SIGNAL_0155, self.data)<br>        self.ema_1_108 = self.I(SIGNAL_01108, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        # if (self.signal1_1 == 1) and (self.data.Close &gt; self.ema_1_22) and (self.ema_1_22 &gt; self.ema_1_55) and (self.ema_1_55 &gt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 2.5% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 - sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 + tp_pct)<br>        #     self.buy(sl=sl_price, tp=tp_price)<br>        # elif (self.signal1_1 == 2)  and (self.data.Close &lt; self.ema_1_22) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 2.5% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 + sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 - tp_pct)<br>        #     self.sell(sl=sl_price, tp=tp_price)<br>            <br>    # def next(self):<br>    #     super().next() <br>    #     if (self.signal1_1 == 1) and (self.ema_1_22 &gt; self.ema_1_55) and 
(self.ema_1_55 &gt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 - sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 + tp_pct)<br>    #         self.buy(sl=sl_price, tp=tp_price)<br>    #     elif (self.signal1_1 == 2) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 + sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 - tp_pct)<br>    #         self.sell(sl=sl_price, tp=tp_price)<br>            <br>        if (self.signal1_1 == 1):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif (self.signal1_1 == 2):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br>            <br>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001)<br>stat_010 = bt_010.run()<br>stat_010</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*PvgDBzzEJxAnmD1lXzKdrg.png" /><figcaption>Backtest over 1000+ days of ETH data (30m timeframe) using the 2D CNN classification model</figcaption></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a 
href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>This code builds upon the previous strategy by combining the loaded model&#39;s predictions (&#39;best_model&#39;) with Exponential Moving Averages (EMAs) to generate buy and sell signals for a backtesting strategy. Here&#39;s a breakdown:</p><p><strong>1. Data Preparation:</strong></p><ul><li>df_ens = df.copy(): Creates a copy of the original DataFrame (df).</li><li>y_pred = best_model.predict(X): Makes predictions on the entire dataset (X) using the loaded model (best_model).</li><li>df_ens[&#39;best_model&#39;] = y_pred: Adds a new column &#39;best_model&#39; to the DataFrame containing the model predictions.</li><li>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int): Similar to before, this creates a shifted signal column &#39;bm&#39; based on the predicted labels, but here it covers predictions for the entire DataFrame.</li><li>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22): Calculates the 22-period EMA for the &#39;Close&#39; price and adds it as a new column &#39;ema_22&#39;.</li><li>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=55): Similar to above, calculates the 55-period EMA and adds it as &#39;ema_55&#39;.</li><li>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108): Calculates the 108-period EMA and adds it as &#39;ema_108&#39;.</li><li>df_ens = df_ens.dropna(): Removes rows with missing values (the early rows lost to the EMA warm-up periods and the shift).</li></ul><p><strong>2. Signal Functions (Outside the Code Block):</strong></p><ul><li>These functions (SIGNAL_010, SIGNAL_0122, etc.) simply return the corresponding columns from the DataFrame (&#39;bm&#39;, &#39;ema_22&#39;, etc.) used for generating the signals.</li></ul><p><strong>3. 
Backtesting Strategy Class (</strong><strong>MyCandlesStrat_010):</strong></p><ul><li>Inherits from Strategy.</li><li>def init(self): Initializes indicators for the model predictions (self.signal1_1) and EMAs (self.ema_1_22, etc.).</li></ul><p><strong>4. Backtesting Logic (in </strong><strong>next function):</strong></p><ul><li>The commented-out sections show a more complex logic that also considers the relationship between the model predictions and the EMAs for buy/sell decisions.</li><li>The current active section uses a simpler approach:</li><li>If self.signal1_1 (model prediction) is 1 (long):</li><li>Buy with stop-loss (SL) at 3.5% below the current close and take-profit (TP) at 2.5% above.</li><li>If self.signal1_1 is 2 (short):</li><li>Sell with SL at 3.5% above the current close and TP at 2.5% below.</li></ul><p><strong>5. Backtesting and Results:</strong></p><ul><li>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001): Creates a backtest object using the DataFrame, strategy class, and other parameters.</li><li>stat_010 = bt_010.run(): Runs the backtest and stores the results in stat_010.</li><li>stat_010: Displays the backtesting statistics for analysis.</li></ul><p><strong>Key Points:</strong></p><ul><li>This strategy combines predictions from the loaded neural network model with technical indicators (EMAs) for generating signals.</li><li>You can experiment with different conditions in the next function to create more sophisticated trading strategies.</li><li>Remember that backtesting results do not guarantee future performance, and proper risk management is crucial for real-world trading.</li></ul><h4>Conclusion for 2D CNN Model:</h4><p>The given code sets up a 2D CNN model with hyperparameter tuning using Keras Tuner. The model is designed to handle multi-class classification for time series data of cryptocurrencies. 
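</p><p>The stop-loss and take-profit logic in the next() methods above is plain percentage arithmetic on the last close. A minimal sketch of that arithmetic (the helper name bracket_prices is ours, not part of the backtesting library):</p>

```python
def bracket_prices(close, sl_pct, tp_pct, side):
    """Return (stop_loss, take_profit) prices as percentage offsets from close."""
    if side == "long":
        # long: stop below the entry, target above it
        return close * (1 - sl_pct), close * (1 + tp_pct)
    if side == "short":
        # short: the two levels are mirrored
        return close * (1 + sl_pct), close * (1 - tp_pct)
    raise ValueError("side must be 'long' or 'short'")

# e.g. a long entry at 3000 with a 3.5% stop-loss and 2.5% take-profit
sl, tp = bracket_prices(3000.0, 0.035, 0.025, "long")
```

<p>With these numbers the stop lands near 2895 and the target near 3075. Note the risk/reward is asymmetric here (3.5% risked for 2.5% targeted), which trades a higher required win rate against smaller per-trade losses.</p><p>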
By leveraging 2D CNNs, the model can capture intricate patterns across both time and features, potentially leading to more accurate and robust predictions compared to 1D CNNs. The use of hyperparameter tuning ensures that the model is optimized for the given task, further enhancing its performance.</p><h3>Applying the 2D CNN Model to Other Assets and Shortlisting the Best:</h3><p>From here on, we explain how to use the same trained model to shortlist the best assets: we download data from TradingView for each asset and run a backtest on all of them.</p><h4>Importing Necessary Packages and Setting Up the Model &amp; Exchange API with CCXT</h4><pre>import time<br>import logging<br>import io<br>import contextlib<br>import glob<br>import ccxt<br>from datetime import datetime, timedelta, timezone<br>import keras<br>from keras.models import save_model, load_model<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>import warnings<br>from threading import Thread, Event<br>import decimal<br>import joblib<br>from tcn import TCN<br><br># from pandas.core.computation import PerformanceWarning<br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># NOTE: Train your own model from the other notebook I have shared and use the most successful trained model here.<br># model_file_path = &#39;./model_lstm_1tp_1sl_2p5SlTp_April_5th_ShRa_1_49_15m.hdf5&#39;<br>model_file_path = &#39;./models/transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;<br>model_name = model_file_path.split(&#39;/&#39;)[-1]<br>##################################### TO Load A Model #######################################<br># NOTE: for LSTM based neural network models you can directly call load_model with model_file_path as given below<br># Load your pre-trained model; a Keras-trained model must be loaded with load_model from keras.models, not with joblib<br>model = load_model(model_file_path)<br># 
# or<br># model = tf.keras.models.load_model(model_file_path)<br># NOTE: for TCN based neural network models, you need to add custom_objects while loading the model, as given below<br># # Define a dictionary to specify custom objects<br># custom_objects = {&#39;TCN&#39;: TCN}<br># model = load_model(model_file_path, custom_objects = custom_objects)<br><br>##########################################################################################<br>########################## Adding the exchange information ##############################<br>exchange = ccxt.binanceusdm(<br>    {<br>        &#39;enableRateLimit&#39;: True,  # required by the Manual<br>        # Add any other authentication parameters if needed<br>        &#39;rateLimit&#39;: 250, &#39;verbose&#39;: True<br>    }<br>    )<br># NOTE: I used https://testnet.binancefuture.com/en/futures/BTCUSDT for the testnet API (it has very bad liquidity issues for various assets and many other problems, but can be used purely for testing purposes)<br>#  kraken testnet creds pubkey - K9dS2SK8JURMl9F300lguUhOS/ao3HM+tfRMgJGed+JhDfpJhvsC/y           privatekey - /J/03PPyPwsrPsKZYtLqOQNPLKZJattT6i15Bpg14/6ALokHHY/MBb1p6tYKyFgkKXIJIOMbBsFRfL3aBZUvQ1<br># api_key = &#39;8f7080f8821b58a53f5c49f00cbff7fdccca9c9154ea&#39;<br># secret_key = &#39;1e58391a46a7dbb098aa5121d3e69e3a6660ba8c38f&#39;<br><br># exchange.apiKey = api_key<br># exchange.secret = secret_key<br># exchange.set_sandbox_mode(True)<br><br># NOTE: if you want to go live, uncomment the 5 lines below, comment out the 5 lines above, and change to your own api_key and secret_key (the one below is a dummy; also make sure to grant &quot;futures&quot; permission while creating your API key on the exchange)<br>api_key = &#39;CxUdC80c3Y5Nf1iRJMZJelOCfFJWISbQsasCb4Zdskx7MM8uCl&#39;<br>secret_key = &#39;p4XwsZwmmNswzDHzE5TSUOgXT5tASArfSOfYrBMtezlCpDGtz&#39;<br>exchange.apiKey = api_key<br>exchange.secret = 
secret_key<br>exchange.set_sandbox_mode(False)<br>#######################################################################################<br>    # exchange.set_sandbox_mode(True)<br>exchange.has<br># exchange.fetchBalance()[&quot;info&quot;][&quot;assets&quot;]<br>exchange.options = {&#39;defaultType&#39;: &#39;future&#39;, # or &#39;margin&#39; or &#39;spot&#39;<br>                    &#39;timeDifference&#39;: 0,  # Set an appropriate initial value for time difference<br>                        &#39;adjustForTimeDifference&#39;: True,<br>                        &#39;newOrderRespType&#39;: &#39;FULL&#39;,<br>                        &#39;defaultTimeInForce&#39;: &#39;GTC&#39;}</pre><p>The provided code snippet demonstrates how to load our trained model and connect to a cryptocurrency exchange (Binance) for potential shortlisting of assets based on backtesting. Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Standard libraries for time, logging, data manipulation (pandas, numpy), machine learning (Keras, scikit-learn), technical indicators (talib), threading, and others.</li></ul><p><strong>Model Loading:</strong></p><ul><li>Comments explain the difference in loading a model based on its type:</li><li><strong>LSTM Model:</strong> Uses load_model from keras.models directly (as shown in your code).</li><li><strong>TCN Model:</strong> Requires specifying custom objects (custom_objects={&#39;TCN&#39;: TCN}) during loading.</li></ul><p><strong>Exchange Connection:</strong></p><ul><li>Creates a ccxt.binanceusdm object (exchange) to interact with the Binance exchange.</li><li>Sets API credentials and enables rate limiting for responsible API usage.</li><li>Comments mention testnet and live API usage options.</li></ul><p><strong>Important Notes:</strong></p><ul><li><strong>Replace API Keys:</strong> Replace the dummy api_key and secret_key with your actual Binance API credentials (if going live). 
Ensure your API has &quot;futures&quot; permission.</li><li><strong>Backtesting Not Shown:</strong> This code focuses on model loading and exchange connection. The actual backtesting loop and asset shortlisting logic are not included.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Loop:</strong> You’ll need to implement a loop to iterate through your desired assets:</li></ol><ul><li>Download historical data from the exchange (using exchange.fetch_ohlcv) for each asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model (model.predict).</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Store backtesting results for each asset.</li></ul><ol><li><strong>Shortlisting:</strong> Analyze the stored backtesting results and apply filters/sorting based on your chosen metrics to shortlist the best-performing assets.</li><li><strong>Risk Management:</strong> Remember, backtesting is for evaluation, not a guarantee of future success. 
Implement proper risk management strategies before using these shortlisted assets in real trading.</li></ol><pre>from sklearn.preprocessing import MinMaxScaler<br>from backtesting import Strategy, Backtest<br>import os<br>import json<br>import pandas as pd<br>import talib as ta<br>import numpy as np<br>from concurrent.futures import ThreadPoolExecutor<br>import threading<br><br>import time<br>import ccxt<br>from keras.models import save_model, load_model<br>import warnings<br>import decimal<br>import joblib<br>import nest_asyncio<br># from pandas.core.computation import PerformanceWarning<br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># Load your pre-trained model<br># model = load_model(&#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39;)<br># Define the custom_assets dictionary outside the loop<br>custom_assets = {}<br># Function to load custom_assets from a text file<br>def load_custom_assets():<br>    if os.path.exists(&#39;custom_assets.txt&#39;):<br>        try:<br>            with open(&#39;custom_assets.txt&#39;, &#39;r&#39;) as txt_file:<br>                return json.loads(txt_file.read())<br>        except json.JSONDecodeError as e:<br>            print(f&quot;Error decoding JSON in custom_assets.txt: {e}&quot;)<br>            return {}<br>    else:<br>        print(&quot;custom_assets.txt file not found. Initializing an empty dictionary.&quot;)<br>        custom_assets = {}<br>        save_custom_assets(custom_assets)<br>        return custom_assets<br># Define a threading lock<br>file_lock = threading.Lock()<br># Function to save custom_assets to a text file<br>def save_custom_assets(custom_assets):<br>    with file_lock:<br>        with open(&#39;custom_assets.txt&#39;, &#39;w&#39;) as txt_file:<br>            json.dump(custom_assets, txt_file, indent=4)</pre><p>The provided code focuses on managing custom assets and preparing for multi-threaded backtesting. 
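</p><p>As a minimal, self-contained illustration of the same lock-plus-JSON persistence pattern (using a temporary file so it does not touch the real custom_assets.txt):</p>

```python
import json
import os
import tempfile
import threading

file_lock = threading.Lock()

def save_assets(path, assets):
    # serialize under the lock so concurrent threads cannot interleave writes
    with file_lock:
        with open(path, "w") as f:
            json.dump(assets, f, indent=4)

def load_assets(path):
    # a missing or corrupt file falls back to an empty dict, as in the article's version
    if not os.path.exists(path):
        return {}
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError:
        return {}

path = os.path.join(tempfile.gettempdir(), "custom_assets_demo.txt")
save_assets(path, {"ETHUSDT": {"sharpe": 0.68}})
print(load_assets(path))  # {'ETHUSDT': {'sharpe': 0.68}}
```

<p>Note the lock only protects writers within a single process; if several independent processes wrote the file, a file-level lock (e.g. fcntl or a library like portalocker) would be needed instead.</p><p>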
Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for data manipulation (pandas, numpy), technical indicators (talib), backtesting framework (backtesting), threading, and others.</li></ul><p><strong>Custom Assets Management:</strong></p><p>custom_assets dictionary:</p><ul><li>Stores custom assets for backtesting (likely symbols or names).</li></ul><p>load_custom_assets function:</p><ul><li>Checks for a file named custom_assets.txt.</li><li>If the file exists, attempts to load the dictionary from the JSON content. Handles potential JSON decoding errors.</li><li>If the file doesn’t exist, initializes an empty dictionary, saves it, and returns it.</li></ul><p>save_custom_assets function:</p><ul><li>Uses a threading lock (file_lock) to ensure safe access to the file during potential concurrent writes.</li><li>Saves the custom_assets dictionary as JSON to the custom_assets.txt file.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Function:</strong> You’ll likely define a function for the backtesting logic. This function would:</li></ol><ul><li>Take an asset symbol as input.</li><li>Download historical data for the asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model.</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Calculate and store backtesting results (Sharpe Ratio, drawdown, etc.) for the asset.</li></ul><p><strong>2. Multithreaded Backtesting:</strong></p><ul><li>You can utilize the ThreadPoolExecutor and threading capabilities to download and backtest multiple assets simultaneously. 
This can significantly improve efficiency compared to a sequential approach.</li><li>The custom_assets dictionary and its management functions will be crucial for providing asset symbols to the backtesting function within the thread pool.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>Remember to replace &#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39; with the actual path to your trained model file.</li><li>Consider error handling and logging mechanisms for potential issues during data download, backtesting calculations, or thread management.</li></ul><pre># NOTE: Fetching Binance Futures perpetual USDT assets. If a 4xx error occurs, access is likely restricted in your region (or your VPN exit is in a region restricted by Binance). You can use the asset collection given in the next cell instead<br><br>import requests<br>def get_binance_futures_assets():<br>    url = &quot;https://fapi.binance.com/fapi/v1/exchangeInfo&quot;<br>    try:<br>        response = requests.get(url)<br>        response.raise_for_status()  # Raise an exception for 4xx and 5xx status codes<br>        data = response.json()<br>        assets = [asset[&#39;symbol&#39;] for asset in data[&#39;symbols&#39;] if asset[&#39;contractType&#39;] == &#39;PERPETUAL&#39; and asset[&#39;quoteAsset&#39;] == &#39;USDT&#39;]<br>        return assets<br>    except requests.exceptions.RequestException as e:<br>        print(&quot;Failed to fetch Binance futures assets:&quot;, e)<br>        return []<br># Get all Binance futures USDT perpetual assets<br>futures_assets = get_binance_futures_assets()<br>print(&quot;Binance Futures USDT Perpetual Assets:&quot;)<br>print(futures_assets, len(futures_assets))</pre><pre>output:<br>&#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, &#39;ADAUSDT.P&#39;, 
&#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, &#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, &#39;ZENUSDT.P&#39;, &#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, &#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        &#39;DENTUSDT.P&#39;, 
&#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, &#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>        &#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, &#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, &#39;XVSUSDT.P&#39;, 
&#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, &#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, &#39;JTOUSDT.P&#39;, &#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, &#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, &#39;MAVIAUSDT.P&#39;, 
&#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;</pre><p>This code snippet retrieves the list of perpetual USDT-margined contracts available on Binance Futures using the official Binance API. Here’s a breakdown:</p><p><strong>Function:</strong></p><p>The get_binance_futures_assets function:</p><ul><li>Defines the API endpoint URL for retrieving exchange information.</li><li>Uses a try-except block to handle potential errors during the request.</li></ul><p>Within the try block, it:</p><ul><li>Makes a GET request to the Binance API endpoint.</li><li>Raises an exception for status codes in the 4xx (client error) or 5xx (server error) range to signal failures.</li><li>Parses the JSON response from the successful request.</li><li>Iterates through the &#39;symbols&#39; list in the JSON data.</li></ul><p>It keeps only the assets that satisfy both criteria:</p><ul><li>&#39;contractType&#39; is &#39;PERPETUAL&#39; (a perpetual contract).</li><li>&#39;quoteAsset&#39; is &#39;USDT&#39; (a USDT-quoted contract).</li></ul><p>Finally:</p><ul><li>It collects the matching asset symbols into a list and returns it.</li><li>The except block catches potential request exceptions and prints an error message. 
It also returns an empty list in case of failures.</li></ul><p><strong>Printing Results:</strong></p><ul><li>Calls the get_binance_futures_assets function to retrieve the asset list.</li><li>Prints a message indicating the retrieved assets and their count.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>This approach relies on the official Binance API, which may be subject to rate limits or changes in the future. Consider implementing appropriate error handling and retry mechanisms.</li><li>The code assumes a successful API call. You might want to add checks for specific error codes (e.g., 429 for “Too Many Requests”) and handle them gracefully (e.g., retrying after a delay).</li></ul><pre># !pip install --upgrade --no-cache-dir git+https://github.com/rongardF/tvdatafeed.git<br><br>import os<br>import json<br>import asyncio<br>import nest_asyncio  # needed for nest_asyncio.apply() below<br>from datetime import datetime, timedelta<br>import pandas as pd<br>from tvDatafeed import TvDatafeed, Interval<br># Initialize TvDatafeed object<br># username = &#39;YourTradingViewUsername&#39;<br># password = &#39;YourTradingViewPassword&#39;<br># tv = TvDatafeed(username, password)<br>tv = TvDatafeed()<br>timeframe = &#39;15m&#39;<br>interval = None<br>if timeframe == &#39;1m&#39;:<br>    interval = Interval.in_1_minute<br>elif timeframe == &#39;3m&#39;:<br>    interval = Interval.in_3_minute<br>elif timeframe == &#39;5m&#39;:<br>    interval = Interval.in_5_minute<br>elif timeframe == &#39;15m&#39;:<br>    interval = Interval.in_15_minute<br>elif timeframe == &#39;30m&#39;:<br>    interval = Interval.in_30_minute<br>elif timeframe == &#39;45m&#39;:<br>    interval = Interval.in_45_minute<br>elif timeframe == &#39;1h&#39;:<br>    interval = Interval.in_1_hour<br>elif timeframe == &#39;2h&#39;:<br>    interval = Interval.in_2_hour<br>elif timeframe == &#39;4h&#39;:<br>    interval = Interval.in_4_hour<br>elif timeframe == &#39;1d&#39;:<br>    interval = Interval.in_daily<br>elif timeframe == &#39;1w&#39;:<br>    interval = 
Interval.in_weekly<br>elif timeframe == &#39;1M&#39;:<br>    interval = Interval.in_monthly<br># NOTE: List of symbols around 126 mentioned here. You can change to your own set of lists if you know the tradingview code for the symbol you want to download.<br>data = [<br>    &#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, &#39;ADAUSDT.P&#39;, &#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, &#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, &#39;ZENUSDT.P&#39;, 
&#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, &#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        &#39;DENTUSDT.P&#39;, &#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, &#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>        
&#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, &#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, &#39;XVSUSDT.P&#39;, &#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, &#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, &#39;JTOUSDT.P&#39;, 
&#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, &#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, &#39;MAVIAUSDT.P&#39;, &#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;<br>]<br>nest_asyncio.apply()<br># Define data download function<br>async def download_data(symbol):<br>    try:<br>        data = tv.get_hist(symbol=symbol, exchange=&#39;BINANCE&#39;, interval=interval, n_bars=20000, extended_session=True)<br>        # get_hist may return None on failure, so guard before checking .empty<br>        if data is not None and not data.empty:<br>            data[&#39;date&#39;] = data.index.astype(str)  # Add a new column for timestamps<br>            folder_name = f&quot;tradingview_crypto_assets_{timeframe}&quot;<br>            os.makedirs(folder_name, exist_ok=True)<br>            # Strip the &quot;USDT.P&quot; suffix to build the file name (e.g. BTCUSDT.P -&gt; BTC.json)<br>            symbol_file_name = symbol.replace(&quot;USDT.P&quot;, &quot;&quot;) + &quot;.json&quot;<br>            file_name = os.path.join(folder_name, symbol_file_name)<br>            # Convert DataFrame to dictionary<br>            data_dict = data.to_dict(orient=&#39;records&#39;)<br>            with open(file_name, 
&quot;w&quot;) as file:<br>                # Serialize dictionary to JSON<br>                json.dump(data_dict, file)<br>            print(f&quot;Data for {symbol} downloaded and saved successfully.&quot;)<br>        else:<br>            print(f&quot;No data available for {symbol}.&quot;)<br>    except Exception as e:<br>        print(f&quot;Error occurred while downloading data for {symbol}: {e}&quot;)<br># Define main function to run async download tasks<br>async def main():<br>    tasks = [download_data(symbol) for symbol in data]<br>    await asyncio.gather(*tasks)<br># Run the main function<br>asyncio.run(main())</pre><p>This code snippet demonstrates how to download historical cryptocurrency data from TradingView for multiple assets using the tvDatafeed library. Here&#39;s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for asynchronous programming (asyncio), working with dates (datetime), data manipulation (pandas), and file handling (os, json).</li><li>Imports the TvDatafeed class from tvDatafeed for interacting with TradingView.</li></ul><p><strong>TvDatafeed Object:</strong></p><ul><li>Initializes a TvDatafeed object (tv) without username and password (assuming a free account). 
Paid accounts might require credentials.</li></ul><p><strong>Timeframe and Interval:</strong></p><ul><li>Sets the desired timeframe (timeframe) for data download (e.g., &quot;15m&quot; for 15-minute intervals).</li><li>Maps the timeframe to the corresponding Interval enumeration value using an if/elif chain.</li></ul><p><strong>Symbols List:</strong></p><ul><li>Defines a long list of symbols (data) representing cryptocurrencies on Binance Futures with perpetual USDT contracts (identified by the &quot;.P&quot; suffix).</li></ul><p><strong>Asynchronous Programming Setup:</strong></p><ul><li>Calls nest_asyncio.apply() so that asyncio.run() can be used inside environments that already run an event loop (for example, Jupyter notebooks).</li></ul><p><strong>Download Function:</strong></p><ul><li>Defines an asynchronous function download_data(symbol) that takes a symbol as input.</li></ul><p>Attempts to download historical data for the symbol using tv.get_hist:</p><ul><li>Specifies the symbol, exchange (“BINANCE”), interval, number of bars (20000), and extended session (to potentially capture pre-market/after-market data).</li><li>Checks that the downloaded data (data) is not empty.</li></ul><p>If data is available:</p><ul><li>Converts the index (timestamps) to strings in a new column named “date”.</li><li>Creates a folder named tradingview_crypto_assets_{timeframe} to store the downloaded data (creating it if it doesn&#39;t exist).</li><li>Constructs the filename by stripping the “USDT.P” suffix from the symbol and appending “.json” (e.g., BTCUSDT.P becomes BTC.json).</li><li>Converts the DataFrame to a dictionary using to_dict(orient=&#39;records&#39;).</li><li>Saves the dictionary as JSON to the constructed filename.</li><li>Prints a success message.</li></ul><p>Otherwise:</p><ul><li>Prints a message indicating no data is available for the symbol.</li><li>The except block catches any exceptions (Exception) raised during the download and prints an error message with the exception details.</li></ul><p><strong>Main Function:</strong></p><ul><li>Defines an 
asynchronous function main that:</li><li>Creates a list of asynchronous tasks (tasks) using a list comprehension; each task calls download_data for a symbol from the data list.</li><li>Uses asyncio.gather(*tasks) to run all download tasks concurrently.</li></ul><p><strong>Running the Download:</strong></p><ul><li>Uses asyncio.run(main()) to execute the asynchronous tasks within the main function.</li></ul><p><strong>Important Notes:</strong></p><ul><li>This code retrieves data for a large number of symbols. Downloading that much data might exceed free account limitations or take a long time. Consider rate limits and adjust accordingly.</li><li>The code assumes a specific symbol format with the “.P” suffix. You might need to modify it for different symbol formats.</li><li>Error handling can be improved by implementing specific checks for different exception types (e.g., network errors, API errors).</li></ul><h3>Hyperoptimization of Multiple Assets for Specific ML/DL Model:</h3><pre>from pandas import Timestamp<br><br># Imports assumed by the rest of this snippet<br>import json<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>from backtesting import Backtest, Strategy<br><br># Define a function to process each JSON file<br>def process_json(file_path):<br>    with open(file_path, &quot;r&quot;) as f:<br>        data = json.load(f)<br>    df = pd.DataFrame(data)<br>    df.rename(columns={&#39;date&#39;: &quot;Date&quot;, &#39;open&#39;: &quot;Open&quot;, &#39;high&#39;: &quot;High&quot;, &#39;low&#39;: &quot;Low&quot;, &#39;close&#39;: &quot;Adj Close&quot;, &#39;volume&#39;: &quot;Volume&quot;}, inplace=True)<br>    df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;])<br>    df.set_index(&quot;Date&quot;, inplace=True)<br>    df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br>    symbol_name = df[&#39;symbol&#39;].iloc[0]  # Assuming all rows have the same symbol<br>    symbol_name = symbol_name.replace(&quot;BINANCE:&quot;, &quot;&quot;)<br>    symbol_name = symbol_name.replace(&quot;USDT.P&quot;, &quot;/USDT:USDT&quot;)<br>    df.drop(columns=[&#39;symbol&#39;], inplace=True)<br>    
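As a quick, self-contained check of the symbol renaming above (the helper name normalize_symbol is mine, not part of the original script), the two replace calls map a TradingView perpetual symbol to a CCXT-style pair name:

```python
def normalize_symbol(raw: str) -> str:
    # Mirrors the two .replace() calls above: drop the exchange prefix,
    # then rewrite the perpetual suffix into a CCXT-style pair name.
    return raw.replace("BINANCE:", "").replace("USDT.P", "/USDT:USDT")

print(normalize_symbol("BINANCE:BTCUSDT.P"))  # -> BTC/USDT:USDT
```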
target_prediction_number = 2<br>    time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>    # NOTE: the original nested loop over a second, identical name_periods list recomputed<br>    # every column 12 times; a single loop over time_periods produces the same columns.<br>    for period in time_periods:<br>        df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>        df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period)<br>        df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>        macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), signalperiod=(period + 9))<br>        df[f&#39;MACD_{period}&#39;] = macd<br>        df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>        bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=period, nbdevup=2, nbdevdn=2)<br>        df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>        df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>        df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>        df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], timeperiod=(period+35))<br>    
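For readers without TA-Lib installed, the EWO column built above can be reproduced with plain pandas rolling means. This ewo helper is my equivalent sketch (the original uses ta.SMA), keeping the same (period + 5) and (period + 35) window offsets:

```python
import pandas as pd

def ewo(close: pd.Series, period: int) -> pd.Series:
    # Elliott Wave Oscillator as used above: fast SMA minus slow SMA.
    return close.rolling(period + 5).mean() - close.rolling(period + 35).mean()

prices = pd.Series(range(1, 61), dtype=float)  # strictly rising series
print(round(ewo(prices, 6).iloc[-1], 2))  # -> 15.0
```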
df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>    df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>    df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br>    # Volume-Based Indicators<br>    df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>    df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br>    # Momentum-Based Indicators<br>    df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br>    df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br>    # More feature engineering...<br>    # Infer the timeframe from the spacing of the last two timestamps<br>    timeframe_diff = df.index[-1] - df.index[-2]<br>    timeframe = None<br>    if timeframe_diff == pd.Timedelta(minutes=1):<br>        timeframe = &#39;1m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=3):<br>        timeframe = &#39;3m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=5):<br>        timeframe = &#39;5m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=15):<br>        timeframe = &#39;15m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=30):<br>        timeframe = &#39;30m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=45):<br>        timeframe = &#39;45m&#39;<br>    elif timeframe_diff == pd.Timedelta(hours=1):<br>        timeframe = &#39;1h&#39;<br>    elif timeframe_diff == pd.Timedelta(hours=2):<br>        timeframe = &#39;2h&#39;<br>    elif timeframe_diff == pd.Timedelta(hours=4):<br>        timeframe = &#39;4h&#39;<br>    elif timeframe_diff == pd.Timedelta(days=1):<br>        timeframe = &#39;1d&#39;<br>    elif timeframe_diff == pd.Timedelta(weeks=1):<br>        timeframe = &#39;1w&#39;<br>    else:<br>        timeframe = &#39;Not sure&#39;<br>        <br>    # print(&#39;timeframe is - &#39;, timeframe)<br>    # Drop rows containing nan values (inf values would need replacing with nan first)<br>    df.dropna(inplace=True)<br>    # Scaling<br>    scaler = 
MinMaxScaler(feature_range=(0,1))<br>    X = df.copy()<br>    X_scale = scaler.fit_transform(X)<br><br>    # Define a function to reshape the data<br>    def reshape_data(data, time_steps):<br>        samples = len(data) - time_steps + 1<br>        reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>        for i in range(samples):<br>            reshaped_data[i] = data[i:i + time_steps]<br>        return reshaped_data<br>    # Reshape the scaled X data<br>    time_steps = 1  # Adjust the number of time steps as needed<br>    X_reshaped = reshape_data(X_scale, time_steps)<br>    # Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, features)<br>    # Each sample contains scaled data for a specific time window<br>    X = X_reshaped<br>    # Use the loaded model to predict on the entire dataset<br>    df_ens = df.copy() <br>    # df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=1)<br>    df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=2)<br>    df_ens[&#39;vcews&#39;] = df_ens[&#39;voting_classifier_ensembel_with_scale&#39;].shift(0).dropna().astype(int)<br>    df_ens = df_ens.dropna()<br>    # Backtesting<br>    df_ens = df_ens.reset_index(inplace=False)<br>    df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>    df_ens.set_index(&#39;Date&#39;, inplace=True)<br>    best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>        &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        
&#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br>    # Define SIGNAL_3 function<br>    def SIGNAL_3(df_ens):<br>        return df_ens[&#39;vcews&#39;]<br>    # Define MyCandlesStrat_3 class<br>    class MyCandlesStrat_3(Strategy):  <br>        sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;] <br>        tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;] <br>        limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;] <br>        sl_pct_s = best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;] <br>        tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;] <br>        limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;] <br>        margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>        TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>        TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>        roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>        roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>        roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>        roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>        roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>        roi_at_500 = best_params[&#39;roi_at_500&#39;]<br>        def init(self):<br>            super().init()<br>            self.signal1 = self.I(SIGNAL_3, self.data)<br>            self.entry_time = Timestamp.now()<br>            self.max_profit = 0<br>        def next(self):<br>            super().next() <br>            if (self.signal1 == 1):<br>                <br>                sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 
0.001))<br>                limit_price_l = tp_price * 0.994<br>                self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br>                <br>                if self.position.is_long:<br>                    self.entry_time = self.trades[0].entry_time  # Record when the trade was entered<br>                <br>                # Check for trailing stop loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    # Minutes elapsed since the trade was entered<br>                    time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time interval-based selling (ROI thresholds tighten over time)<br>                    if self.position and (time_spent_by_asset1 &lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (50 &lt; time_spent_by_asset1 &lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (100 &lt; time_spent_by_asset1 &lt;= 150) and (self.trades[0].pl_pct &gt; 
(self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (150 &lt; time_spent_by_asset1 &lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (200 &lt; time_spent_by_asset1 &lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (300 &lt; time_spent_by_asset1 &lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and (time_spent_by_asset1 &gt;= 950):<br>                        self.position.close()<br>            elif (self.signal1 == 2):<br>                <br>                sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                limit_price_s = tp_price * 1.004<br>                self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br>                <br>                if self.position.is_short:<br>                    self.entry_time = self.trades[0].entry_time  # Record when the trade was entered<br>                <br>                # Check for trailing stop 
loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = max((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                # Minutes elapsed since the trade was entered (computed here so the<br>                # checks below work even before the trailing stop activates)<br>                time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                # Check for time interval-based selling (ROI thresholds tighten over time)<br>                if self.position and (time_spent_by_asset1 &lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                    self.position.close()<br>                elif self.position and (50 &lt; time_spent_by_asset1 &lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                    self.position.close()<br>                elif self.position and (100 &lt; time_spent_by_asset1 &lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                    self.position.close()<br>                elif self.position and (150 &lt; time_spent_by_asset1 &lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                    self.position.close()<br>                elif self.position and (time_spent_by_asset1 &gt; 200) and 
((self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60 &lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                    self.position.close()<br>                elif self.position and (300 &lt; (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60 &lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                    self.position.close()<br>                elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60 &gt;= 950):<br>                    self.position.close()<br><br>    # Run backtest<br>    bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, margin=(1 / MyCandlesStrat_3.margin_leverage), exclusive_orders=False)<br>    stat_3 = bt_3.run()<br>    print(&quot;first backtest done - &quot;, stat_3)<br>    if ((stat_3[&#39;Return [%]&#39;] &gt; (stat_3[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>        &amp; (stat_3[&#39;Profit Factor&#39;] &gt; 1.0) <br>        &amp; (stat_3[&#39;Max. 
Drawdown [%]&#39;] &gt; -40)<br>        &amp; (stat_3[&#39;Win Rate [%]&#39;] &gt; 55)<br>        &amp; (stat_3[&#39;Return [%]&#39;] &gt; 0)):<br>        file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>        <br>        best_params = {&#39;Optimizer&#39;: &#39;1st backtest - Expectancy&#39;,<br>                       &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_3[&#39;Start&#39;]} upto {stat_3[&#39;End&#39;]} for a duration of {stat_3[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_3[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_3[&#39;Return [%]&#39;],3)},Expectancy % - {round(stat_3[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_3[&#39;Sharpe Ratio&#39;],4)}.&quot;,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br>        key_mapping = {<br>            &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>            &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>            &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: &#39;take_profit_percent_long&#39;,<br>            
&#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>            &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>            &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>            &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>            &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>            &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>            &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>            &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>        }<br>        custom_assets = load_custom_assets()<br>        transformed_params = {}<br>        for old_key, value in best_params.items():<br>            new_key = key_mapping.get(old_key, old_key)<br>            transformed_params[new_key] = value<br>        new_key = file_prefix + &quot;/USDT:USDT&quot;<br>        # custom_assets[new_key] = transformed_params<br>        # Update or add new entry to custom_assets<br>        if new_key in custom_assets:<br>            # Update existing entry<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key].update(transformed_params)<br>        else:<br>            # Add new entry<br>            # Multiply numerical values by 0.001 for new entry if value &gt; 1<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, float)) and 
key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key] = transformed_params<br>        <br>        # Save custom_assets to JSON file<br>        save_custom_assets(custom_assets)<br>        print(custom_assets)<br>    else:<br>        # Optimization<br>        def optimize_strategy():<br>            # Optimization Params<br>            optimizer = &#39;Win Rate [%]&#39;<br>            stats = bt_3.optimize(<br>                sl_pct_l = range(6,100, 2), # (5,10,15,20,25,30,40,50,75,100)<br>                tp_pct_l =  range(40,100, 2), # (0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.075, 0.1)<br>                # limit_l =  (4,9,14,19,24,29,39,49,74,90),#  (0.004, 0.009, 0.014, 0.019, 0.024, 0.029, 0.039, 0.049, 0.074, 0.09)<br>                sl_pct_s = range(6,100, 2),<br>                tp_pct_s =  range(40,100, 2),<br>                # limit_s =  (4,9,14,19,24,29,39,49,74,90),<br>                margin_leverage = range(1, 8),<br>                TRAILING_ACTIVATE_PCT = range(6,100,2),<br>                TRAILING_STOP_PCT = range(6,100,2),<br>                roi_at_50 = range(6,100,2),<br>                roi_at_100 = range(6,100,2),<br>                roi_at_150 = range(6,100,2),<br>                roi_at_200 = range(6,100,2),<br>                roi_at_300 = range(6,100,2),<br>                roi_at_500 = range(6,100,2),<br>                constraint=lambda p: ( (p.sl_pct_l &gt; (p.tp_pct_l) ) and <br>                                      ((p.sl_pct_s) &gt; (p.tp_pct_s)) and <br>                                      (p.roi_at_50 &gt; p.roi_at_100) and (p.roi_at_100 &gt; p.roi_at_150) and <br>                                      (p.roi_at_150 &gt; p.roi_at_200) and (p.roi_at_200 &gt; p.roi_at_300) and (p.roi_at_300 &gt; p.roi_at_500) and<br>                                     (p.TRAILING_ACTIVATE_PCT &gt; p.TRAILING_STOP_PCT)),<br>  
              maximize = optimizer,<br>                return_optimization=True,<br>                method = &#39;skopt&#39;,<br>                max_tries = 120 # for grid methods this is a fraction of the grid to evaluate (0.2 = 20%, 1.0 = 100%); for the &#39;skopt&#39; method it is the number of evaluations, from 1 up to 200<br>            )<br>            # Extract the optimization results<br>            best_params = {<br>                &#39;Optimizer&#39;: optimizer,<br>                &#39;model_trained_on&#39;: model_name,<br>                &#39;OptimizerResult_Cross&#39;: stats[0][optimizer],<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats[1].x[0],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats[1].x[1] ,<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats[1].x[1] * 0.997,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats[1].x[2] ,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats[1].x[3] ,<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats[1].x[3] * 0.997,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: stats[1].x[4],<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: stats[1].x[5],<br>                &#39;TRAILING_STOP_PCT&#39; : stats[1].x[6],<br>                &#39;roi_at_50&#39; : stats[1].x[7],<br>                &#39;roi_at_100&#39; : stats[1].x[8],<br>                &#39;roi_at_150&#39; : stats[1].x[9],<br>                &#39;roi_at_200&#39; : stats[1].x[10],<br>                &#39;roi_at_300&#39; : stats[1].x[11],<br>                &#39;roi_at_500&#39; : stats[1].x[12]<br>                # &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats._strategy.sl_pct_l,<br>                # &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats._strategy.tp_pct_l,<br>                # &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats._strategy.tp_pct_l * 0.998,<br>                # &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats._strategy.sl_pct_s,<br>                # 
&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats._strategy.tp_pct_s,<br>                # &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats._strategy.sl_pct_s * 0.998,<br>                # &#39;BEST_LEVERAGE_margin_leverage&#39;: stats._strategy.margin_leverage<br>            }<br>            <br>            return best_params<br><br>        # Obtain best parameters<br>        best_params = optimize_strategy()<br>        print(&quot;best_params line 322 &quot;, best_params)<br>        if best_params:<br>            print(best_params)<br>        else:<br>            best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;: 0.024,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: 0.024,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,  # required below by MyCandlesStrat_11<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>            &#39;TRAILING_STOP_PCT&#39; : 5,<br>            &#39;roi_at_50&#39; : 24,<br>            &#39;roi_at_100&#39; : 20,<br>            &#39;roi_at_150&#39; : 18,<br>            &#39;roi_at_200&#39; : 15,<br>            &#39;roi_at_300&#39; : 13,<br>            &#39;roi_at_500&#39; : 10}<br>        # Define SIGNAL_11 function<br>        def SIGNAL_11(df_ens):<br>            return df_ens[&#39;vcews&#39;]<br>        # Define MyCandlesStrat_11 class<br>        class MyCandlesStrat_11(Strategy):  <br>            sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;]<br>            tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;]<br>            limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;]<br>            sl_pct_s = 
best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;]<br>            tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;]<br>            limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;]<br>            margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>            TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>            TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>            roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>            roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>            roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>            roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>            roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>            roi_at_500 = best_params[&#39;roi_at_500&#39;]<br>            def init(self):<br>                super().init()<br>                self.signal1 = self.I(SIGNAL_11, self.data)<br>                self.entry_time = Timestamp.now()<br>                self.max_profit = 0<br>            def next(self):<br>                super().next() <br>                if (self.signal1 == 1):<br>                    sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 0.001))<br>                    limit_price_l = tp_price * 0.994<br>                    self.position.is_long<br>                    self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br>                    if self.position.is_long:<br>                        self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                        
self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    # time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and 
(self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br>                elif (self.signal1 == 2):<br>                    sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                    limit_price_s = tp_price * 1.004<br>                    self.position.is_short<br>                    self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br>                    if self.position.is_short:<br>                        self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                        self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = max((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                        time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br>                    # Check for time 
interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and 
((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br><br>        # Run backtest with optimized parameters<br>        bt_11 = Backtest(df_ens, MyCandlesStrat_11, cash=100000, commission=.001, margin=(1 / MyCandlesStrat_11.margin_leverage), exclusive_orders=False)<br>        stat_11 = bt_11.run()<br>        print(&quot;stat_11 line 388 - &quot;, stat_11)<br>        # Additional processing for custom_assets<br>        # custom_assets = {}<br>        if ((stat_11[&#39;Return [%]&#39;] &gt; (stat_11[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>            &amp; (stat_11[&#39;Profit Factor&#39;] &gt; 1.0)<br>            &amp; (stat_11[&#39;Max. Drawdown [%]&#39;] &gt; -35)<br>            &amp; (stat_11[&#39;Win Rate [%]&#39;] &gt; 52)<br>            &amp; (stat_11[&#39;Return [%]&#39;] &gt; 0)):<br>            file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>            <br>            print(f&quot;second backtest success for {file_prefix}/USDT:USDT with Win Rate % of {stat_11[&#39;Win Rate [%]&#39;]} and with Return in % of {stat_11[&#39;Return [%]&#39;]}&quot; )<br>            <br>            <br>            best_params = {&#39;Optimizer&#39;: &#39;2nd backtest with Expectancy&#39;,<br>            # &#39;OptimizerResult_Cross&#39;: f&quot;2nd backtest, Sharpe Ratio - {stat_11[&#39;Sharpe Ratio&#39;]}, Returns % - {stat_11[&#39;Return [%]&#39;]}, Win Rate % - {stat_11[&#39;Win Rate [%]&#39;]}&quot;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_11[&#39;Start&#39;]} upto {stat_11[&#39;End&#39;]} for a duration of {stat_11[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_11[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_11[&#39;Return [%]&#39;],3)}, Expectancy % - 
{round(stat_11[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_11[&#39;Sharpe Ratio&#39;],3)}.&quot;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: MyCandlesStrat_11.sl_pct_l.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: MyCandlesStrat_11.tp_pct_l.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;:  round(MyCandlesStrat_11.tp_pct_l.tolist() * 0.996, 2),<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: MyCandlesStrat_11.sl_pct_s.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: MyCandlesStrat_11.tp_pct_s.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: round(MyCandlesStrat_11.sl_pct_s.tolist() * 0.996,2),<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: MyCandlesStrat_11.margin_leverage.tolist(),<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: MyCandlesStrat_11.TRAILING_ACTIVATE_PCT.tolist(),<br>            &#39;TRAILING_STOP_PCT&#39; : MyCandlesStrat_11.TRAILING_STOP_PCT.tolist(),<br>            &#39;roi_at_50&#39; : MyCandlesStrat_11.roi_at_50.tolist(),<br>            &#39;roi_at_100&#39; : MyCandlesStrat_11.roi_at_100.tolist(),<br>            &#39;roi_at_150&#39; :MyCandlesStrat_11.roi_at_150.tolist(),<br>            &#39;roi_at_200&#39; : MyCandlesStrat_11.roi_at_200.tolist(),<br>            &#39;roi_at_300&#39; : MyCandlesStrat_11.roi_at_300.tolist(),<br>            &#39;roi_at_500&#39; : MyCandlesStrat_11.roi_at_500.tolist()<br>                          }<br>            <br>            # print(&quot;best_params under stat_11 &quot;, best_params)<br>            key_mapping = {<br>                &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>                &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>                &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 
&#39;take_profit_percent_long&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>                &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>                &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>                &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>                &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>                &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>                &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>                &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>            }<br>            # Update or add new entry to custom_assets<br>            custom_assets = load_custom_assets()<br>            <br>            transformed_params = {}<br>            for old_key, value in best_params.items():<br>                new_key = key_mapping.get(old_key, old_key)<br>                transformed_params[new_key] = value<br>            new_key = file_prefix + &quot;/USDT:USDT&quot;<br>            # custom_assets[new_key] = transformed_params<br>            if new_key in custom_assets:<br>                # Update existing entry<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key].update(transformed_params)<br>            else:<br>                # Add new entry<br>            
    # Multiply numerical values by 0.001 for new entry if value &gt; 1<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key] = transformed_params<br>            # Save custom_assets to JSON file<br>            save_custom_assets(custom_assets)<br>        print(&quot;custom_assets after save &quot;, custom_assets)<br>    return df, symbol_name, custom_assets<br>    # except Exception as e:<br>    #     # Print the error message<br>    #     print(f&quot;Error processing {file_path}: {e}&quot;)<br>    #     print(&quot;custom assets at error level line 361 &quot;, custom_assets)<br>    #     # Return None for both DataFrame and symbol name to indicate failure<br>    #     return None, symbol_name, custom_assets<br><br># Define a thread worker function<br>def thread_worker(file):<br>    result = process_json(file)<br>    return result<br>def main():<br>    # Get a list of all JSON files in the folder<br>    # NOTE: make sure to mention the tradingview downloaded data folder here<br>    json_files = [f&quot;./tradingview_crypto_assets_15m/{file}&quot; for file in os.listdir(&quot;./tradingview_crypto_assets_15m/&quot;) if file.endswith(&quot;.json&quot;)]<br>    # print(json_files)<br>    # Get the number of available CPU cores<br>    num_cores = os.cpu_count()<br>    # print(num_cores)<br>    # Set the max_workers parameter based on the number of CPU cores<br>    max_workers = num_cores if num_cores else 1  # os.cpu_count() can return None; default to 1 in that case<br>    # max_workers = 1<br>    print(&#39;max workers (Total Number of CPU cores to be used) - &#39;, max_workers)<br>    # Process JSON files concurrently using a thread pool<br>    with 
ThreadPoolExecutor(max_workers=max_workers) as executor:<br>        # Submit threads for each JSON file<br>        futures = [executor.submit(thread_worker, file) for file in json_files]<br>    # Wait for all threads to complete<br>    results = [future.result() for future in futures]<br>    # Process the results as needed<br>    for result in results:<br>        if result is None:<br>            continue<br>        df, symbol_name, custom_assets = result<br>        print(f&quot;Processed {symbol_name}&quot;)<br>        print(f&#39;custom_assets &#39;, custom_assets)<br>        if custom_data:  # Check if custom_data is not None<br>            custom_assets.update(custom_data)<br>            <br># Define a function to continuously run the loop<br>def run_continuous_loop():<br>    while True:<br>        main()<br># Start the continuous loop in a separate thread<br>thread = threading.Thread(target=run_continuous_loop)<br>thread.start()</pre><pre>output:<br>max workers (Total Number of CPU cores to be used) -  4<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   85.237208<br>Equity Final [$]                  45917.74697<br>Equity Peak [$]                  119511.93047<br>Return [%]                         -54.082253<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                  -98.222272<br>Volatility (Ann.) [%]                3.390676<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -63.780594<br>Avg. Drawdown [%]                   -7.944307<br>Max. 
Drawdown Duration       65 days 12:15:00<br>Avg. Drawdown Duration        6 days 13:06:00<br># Trades                                  704<br>Win Rate [%]                        42.471591<br>Best Trade [%]                       7.078622<br>Worst Trade [%]                     -5.342172<br>Avg. Trade [%]                      -0.100692<br>Max. Trade Duration           0 days 16:00:00<br>Avg. Trade Duration           0 days 02:09:00<br>Profit Factor                        0.910244<br>Expectancy [%]                      -0.083294<br>SQN                                 -1.448338<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object</pre><pre>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   66.332234<br>Equity Final [$]                 71540.222404<br>Equity Peak [$]                 104167.687582<br>Return [%]                         -28.459778<br>Buy &amp; Hold Return [%]               -7.367375<br>Return (Ann.) [%]                  -84.223673<br>Volatility (Ann.) [%]               22.572365<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                   -51.37295<br>Avg. Drawdown [%]                  -27.548884<br>Max. Drawdown Duration       72 days 04:15:00<br>Avg. 
Drawdown Duration       36 days 07:23:00<br># Trades                                  194<br>Win Rate [%]                        67.525773<br>Best Trade [%]                       5.416101<br>Worst Trade [%]                     -5.321366<br>Avg. Trade [%]                       0.213571<br>Max. Trade Duration           3 days 06:15:00<br>Avg. Trade Duration           0 days 09:34:00<br>Profit Factor                        1.200713<br>Expectancy [%]                       0.280423<br>SQN                                 -0.892377<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   91.299986<br>Equity Final [$]                  130059.0093<br>Equity Peak [$]                  134623.59371<br>Return [%]                          30.059009<br>Buy &amp; Hold Return [%]              -22.347518<br>Return (Ann.) [%]                  201.316051<br>Volatility (Ann.) [%]              304.038082<br>Sharpe Ratio                         0.662141<br>Sortino Ratio                        3.673322<br>Calmar Ratio                         7.803374<br>Max. Drawdown [%]                  -25.798591<br>Avg. Drawdown [%]                   -3.662406<br>Max. Drawdown Duration       40 days 01:45:00<br>Avg. Drawdown Duration        2 days 09:32:00<br># Trades                                  267<br>Win Rate [%]                        68.539326<br>Best Trade [%]                       5.435705<br>Worst Trade [%]                     -5.361912<br>Avg. Trade [%]                       0.352395<br>Max. Trade Duration           2 days 23:30:00<br>Avg. 
Trade Duration           0 days 10:46:00<br>Profit Factor                        1.355785<br>Expectancy [%]                       0.405569<br>SQN                                  0.691026<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br>{&#39;QNT/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&#39;, &#39;Optimizer_result&#39;: &#39;For QNT/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.54, Return % - 30.059,Expectancy % - 0.40557 and Sharpe Ratio - 0.6621.&#39;, &#39;stop_loss_percent_long&#39;: 0.052, &#39;take_profit_percent_long&#39;: 0.055, &#39;limit_long&#39;: 0.054, &#39;stop_loss_percent_short&#39;: 0.052, &#39;take_profit_percent_short&#39;: 0.055, &#39;limit_short&#39;: 0.054, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.045, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.054, &#39;roi_at_100&#39;: 0.05, &#39;roi_at_150&#39;: 0.045, &#39;roi_at_200&#39;: 0.04, &#39;roi_at_300&#39;: 0.03, &#39;roi_at_500&#39;: 0.01}}<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   72.710334<br>Equity Final [$]                120933.565868<br>Equity Peak [$]                 175382.890316<br>Return [%]                          20.933566<br>Buy &amp; Hold Return [%]               10.347826<br>Return (Ann.) [%]                  123.644543<br>Volatility (Ann.) 
[%]              997.602729<br>Sharpe Ratio                         0.123942<br>Sortino Ratio                        1.746241<br>Calmar Ratio                         2.495813<br>Max. Drawdown [%]                  -49.540787<br>Avg. Drawdown [%]                   -7.082874<br>Max. Drawdown Duration       38 days 02:30:00<br>Avg. Drawdown Duration        2 days 21:06:00<br># Trades                                  159<br>Win Rate [%]                        55.345912<br>Best Trade [%]                      17.226306<br>Worst Trade [%]                     -5.328038<br>Avg. Trade [%]                       0.357529<br>Max. Trade Duration           3 days 10:30:00<br>Avg. Trade Duration           0 days 08:54:00<br>Profit Factor                        1.220746<br>Expectancy [%]                       0.490183<br>SQN                                  0.275209<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   78.859108<br>Equity Final [$]                 89782.288674<br>Equity Peak [$]                 199863.063254<br>Return [%]                         -10.217711<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                  -48.716197<br>Volatility (Ann.) [%]               90.146634<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -55.078098<br>Avg. Drawdown [%]                   -3.816866<br>Max. Drawdown Duration       40 days 14:15:00<br>Avg. 
Drawdown Duration        1 days 08:00:00<br># Trades                                  129<br>Win Rate [%]                        56.589147<br>Best Trade [%]                       5.476307<br>Worst Trade [%]                     -5.701131<br>Avg. Trade [%]                       0.040376<br>Max. Trade Duration           2 days 19:15:00<br>Avg. Trade Duration           0 days 14:21:00<br>Profit Factor                        1.072351<br>Expectancy [%]                       0.136691<br>SQN                                 -0.153638<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>.................................................................................................................................<br>(output goes on for all the assets, and the shortlisted assets get saved inside custom_assets.txt)<br></pre><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>The Python code used here backtests a cryptocurrency trading strategy. 
Here’s a breakdown of the code functionalities:</p><p><strong>Data Processing:</strong></p><ol><li><strong>Function process_json:</strong> This function reads a JSON file containing cryptocurrency price data.</li><li><strong>Data Cleaning and Transformation:</strong> It cleans and transforms the data by:</li></ol><ul><li>Renaming columns to standard names (e.g., ‘date’ to ‘Date’).</li><li>Converting the ‘Date’ column to datetime format.</li><li>Setting ‘Date’ as the index.</li><li>Filling missing values in the ‘Close’ column with the previous close price.</li><li>Extracting the symbol name from the ‘symbol’ column.</li></ul><ol start="3"><li><strong>Technical Indicator Calculation:</strong> The script calculates various technical indicators such as ATR, EMA, and RSI using the ta library (assumed to be imported).</li><li><strong>Feature Engineering:</strong> It creates additional features such as returns, volatility, volume-based indicators, and momentum-based indicators.</li><li><strong>Data Scaling:</strong> The script scales the data using MinMaxScaler for better model performance during backtesting.</li><li><strong>Reshaping Data:</strong> The data is reshaped into a format suitable for the trading strategy (e.g., sequences of past price data).</li></ol><p><strong>Backtesting Strategy:</strong></p><ol><li><strong>Function SIGNAL_3:</strong> This function defines the trading signals (its criteria are not shown in the provided code).</li><li><strong>Class MyCandlesStrat_3:</strong> This class defines the trading strategy using the backtesting.py library (the Backtest.optimize progress lines in the output above come from it). Key elements include:</li></ol><ul><li><strong>Stop-loss and Take-profit:</strong> These are set based on predefined percentages (BEST_STOP_LOSS_sl_pct_long, etc.) 
for long and short positions.</li><li><strong>Limit orders:</strong> These are used to ensure order execution within a specific price range.</li><li><strong>Trailing Stop-loss:</strong> The stop-loss is dynamically adjusted based on current profit to lock in gains.</li><li><strong>Time-based profit taking:</strong> Profits are automatically locked in after the asset has been held for a certain time.</li><li><strong>Leverage:</strong> The strategy uses a predefined leverage multiplier (BEST_LEVERAGE_margin_leverage).</li></ul><p><strong>Backtesting and Analysis:</strong></p><ol><li><strong>Backtest:</strong> The script performs a backtest on the processed data using the MyCandlesStrat_3 strategy with a starting capital of 100,000.</li><li><strong>Performance Metrics:</strong> The backtest output includes performance metrics such as Return, Sharpe Ratio, Win Rate, and Drawdown, as shown above.</li></ol><p><strong>Conditional Logic:</strong></p><ul><li>The script checks if certain performance conditions are met (high return, good profit factor, etc.).</li><li>If the conditions are satisfied, the script saves the trading strategy parameters for this specific asset.</li></ul><p>The script also uses the ThreadPoolExecutor class for parallel processing of JSON files. Here&#39;s a breakdown of its functionality:</p><p><strong>1. Thread Worker Function (thread_worker):</strong></p><ul><li>This function takes a single JSON file path as input (file).</li><li>It calls the process_json function (described above) to process the JSON data.</li><li>It returns the processed result: a Pandas DataFrame (df), the symbol name (symbol_name), and potentially other custom data (custom_assets).</li></ul><p><strong>2. 
Main Function (</strong><strong>main):</strong></p><ul><li>It retrieves a list of all JSON files within a specified folder (./tradingview_crypto_assets_15m/).</li><li>It determines the number of available CPU cores using os.cpu_count().</li><li>It sets the max_workers parameter for the ThreadPoolExecutor based on the CPU cores (using all cores if available, defaulting to 1 otherwise).</li><li>It prints the number of cores to be used for processing.</li><li>It creates a ThreadPoolExecutor with the determined max_workers.</li><li>It iterates through the list of JSON files and submits each file path to the thread pool using executor.submit(thread_worker, file). This creates tasks for each file to be processed concurrently.</li><li>It waits for all submitted tasks (futures) to complete using future.result() and stores the results in a list (results).</li><li>It iterates through the processing results:</li><li>If a result is None, it skips to the next iteration (potentially handling errors).</li><li>Otherwise, it unpacks the result (df, symbol_name, and potentially custom_assets).</li><li>It prints information about the processed symbol and the custom assets (if any).</li><li>It conditionally updates custom_assets with additional custom data (custom_data) if it exists (logic not entirely shown).</li></ul><p><strong>3. Continuous Loop Function (</strong><strong>run_continuous_loop):</strong></p><ul><li>This function defines an infinite loop (while True).</li><li>Inside the loop, it calls the main function, presumably to process a batch of JSON files repeatedly.</li></ul><p><strong>4. Starting the Loop:</strong></p><ul><li>The code creates a separate thread using threading.Thread and sets its target to the run_continuous_loop function.</li><li>Finally, it starts the thread, initiating the continuous processing loop.</li></ul><p><strong>Overall, this code snippet demonstrates parallel processing of JSON files using a thread pool based on CPU cores. 
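</strong></p><p>The walkthrough above can be sketched in Python roughly as follows. This is a minimal reconstruction under stated assumptions, not the article’s full code: <code>process_json</code> is stubbed out (its real body loads, cleans, and backtests one asset’s data), and only the dispatch logic mirrors the description; the folder name is the one mentioned in the walkthrough.</p>

```python
import glob
import os
import threading
from concurrent.futures import ThreadPoolExecutor

def process_json(file):
    """Stub for the article's process_json: the real function loads one
    asset's JSON data, cleans it, computes indicators, and backtests it.
    Here it only derives the symbol name so the sketch is runnable."""
    symbol_name = os.path.splitext(os.path.basename(file))[0]
    df = None          # the real function returns a cleaned DataFrame
    custom_data = {}   # ...and any parameters worth saving for the asset
    return df, symbol_name, custom_data

def thread_worker(file):
    # One unit of work: fully process a single JSON file.
    return process_json(file)

def main():
    files = glob.glob("./tradingview_crypto_assets_15m/*.json")
    max_workers = os.cpu_count() or 1   # use all cores, default to 1
    print(f"Processing {len(files)} files on {max_workers} cores")
    custom_assets = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(thread_worker, f) for f in files]
        results = [future.result() for future in futures]
    for result in results:
        if result is None:
            continue                    # skip assets that failed
        df, symbol_name, custom_data = result
        print(f"Processed {symbol_name}")
        if custom_data:
            custom_assets.update(custom_data)  # keep shortlisted params
    return custom_assets

def run_continuous_loop():
    while True:                         # reprocess the batch forever
        main()

# The article starts the loop on a background thread, e.g.:
# threading.Thread(target=run_continuous_loop, daemon=True).start()
```

<p>Because <code>process_json</code> is CPU-heavy, a ProcessPoolExecutor could serve equally well here; the thread pool matches the article’s description.</p><p><strong>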
The loop continuously processes batches of files.</strong></p><p><strong>The code demonstrates a framework for backtesting a cryptocurrency trading strategy that uses technical indicators and incorporates risk management techniques like stop-loss and trailing stop-loss.</strong></p><p><strong>Disclaimer:</strong></p><ul><li>Always remember that backtesting results may not be indicative of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h4>custom_assets.txt Output:</h4><pre>{<br>    &quot;QNT/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For QNT/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 68.54, Return % - 30.059,Expectancy % - 0.40557 and Sharpe Ratio - 0.6621.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;NMR/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: 
&quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For NMR/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 64.15, Return % - 58.843,Expectancy % - 0.81714 and Sharpe Ratio - 0.7911.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;BNT/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For BNT/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 59.86, Return % - 25.737,Expectancy % - 0.50737 and Sharpe Ratio - 0.6972.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        
&quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;GMX/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For GMX/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 65.38, Return % - 48.712,Expectancy % - 1.00621 and Sharpe Ratio - 1.293.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;T/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For T/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame 
of 15m with Win Rate % - 64.71, Return % - 133.093,Expectancy % - 1.47402 and Sharpe Ratio - 0.596.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;MATIC/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For MATIC/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 66.9, Return % - 47.395,Expectancy % - 0.31207 and Sharpe Ratio - 0.7049.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 
0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;OP/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;2nd backtest with Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For OP/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 60.0, Return % - 67.048, Expectancy % - 0.57196 and Sharpe Ratio - 0.599.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.034,<br>        &quot;take_profit_percent_long&quot;: 0.058,<br>        &quot;limit_long&quot;: 0.05777,<br>        &quot;stop_loss_percent_short&quot;: 0.092,<br>        &quot;take_profit_percent_short&quot;: 0.081,<br>        &quot;limit_short&quot;: 0.09163,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.061,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.062,<br>        &quot;roi_at_50&quot;: 0.06,<br>        &quot;roi_at_100&quot;: 0.046,<br>        &quot;roi_at_150&quot;: 0.052,<br>        &quot;roi_at_200&quot;: 0.028,<br>        &quot;roi_at_300&quot;: 0.018,<br>        &quot;roi_at_500&quot;: 0.024<br>    },<br>    &quot;ENJ/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For ENJ/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 73.83, Return % - 116.707,Expectancy % - 0.78547 and Sharpe Ratio - 0.7876.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        
&quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;OMNI/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For OMNI/USDT:USDT backtest was done from 2024-04-19 01:45:00 upto 2024-05-14 04:00:00 for a duration of 25 days 02:15:00 using time frame of 15m with Win Rate % - 61.36, Return % - 15.375,Expectancy % - 0.53339 and Sharpe Ratio - 0.606.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;ATOM/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: 
&quot;cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For ATOM/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 67.98, Return % - 3.37,Expectancy % - 0.32805 and Sharpe Ratio - 0.1328.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.052,<br>        &quot;take_profit_percent_long&quot;: 0.055,<br>        &quot;limit_long&quot;: 0.054,<br>        &quot;stop_loss_percent_short&quot;: 0.052,<br>        &quot;take_profit_percent_short&quot;: 0.055,<br>        &quot;limit_short&quot;: 0.054,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.045,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.054,<br>        &quot;roi_at_100&quot;: 0.05,<br>        &quot;roi_at_150&quot;: 0.045,<br>        &quot;roi_at_200&quot;: 0.04,<br>        &quot;roi_at_300&quot;: 0.03,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>.................................... <br>(all 27 assets got shortlisted as per the parameters given by us during <br>optimization and backtesting, with the downloaded data, for the neural <br>network model we trained on)<br>}</pre><p>The provided data snippet appears to be the results of backtesting a cryptocurrency trading strategy on multiple assets. 
Here’s a breakdown of the information:</p><p><strong>Structure:</strong></p><ul><li>It’s a dictionary with currency pairs (e.g., “ATOM/USDT:USDT”) as keys.</li></ul><p><strong>Content for Each Asset:</strong></p><ul><li><strong>Optimizer_used:</strong> This specifies the optimization method used for backtesting (here, “1st backtest — Expectancy”).</li><li><strong>model_name:</strong> This indicates the model used for the trading strategy (“cnn_model_2d_15m_ETH_May_16_SL55_TP55_ShRa_0.91_time_20240516025817.keras”).</li><li><strong>Optimizer_result:</strong> This is a detailed description of the backtesting results for the specific asset. It includes:</li><li>Start and end date of the backtest.</li><li>Backtesting duration.</li><li>Timeframe used (e.g., 15m).</li><li>Win Rate percentage.</li><li>Return percentage.</li><li>Expectancy percentage.</li><li>Sharpe Ratio.</li><li><strong>stop_loss_percent_long/short:</strong> These define the stop-loss percentages for long and short positions.</li><li><strong>take_profit_percent_long/short:</strong> These define the take-profit percentages for long and short positions.</li><li><strong>limit_long/short:</strong> These define the maximum price deviation allowed for entry orders (likely to prevent excessive slippage).</li><li><strong>margin_leverage:</strong> This specifies the leverage used for margin trading (set to 1 here, indicating no leverage).</li><li><strong>TRAILING_ACTIVATE_PCT &amp; TRAILING_STOP_PCT:</strong> These define parameters for trailing stop-loss, which adjusts the stop-loss dynamically.</li><li><strong>roi_at_50, 100, 150, etc.:</strong> These are likely time-based profit-taking thresholds (e.g., roi_at_50 would be the minimum ROI at which to exit after holding the position for 50 time periods).</li></ul><p><strong>Interpretation:</strong></p><ul><li>This data likely comes from a backtesting tool that evaluated a specific trading strategy on various cryptocurrencies.</li><li>The results show performance metrics like win rate, 
return, and Sharpe Ratio for each asset.</li><li>Stop-loss, take-profit, and leverage parameters define the risk management aspects of the strategy.</li></ul><p><strong>Shortlisted Assets and Saving:</strong></p><ul><li>The output mentions “shortlisted assets” but doesn’t explicitly show how they are identified. Most likely, assets meeting certain performance criteria (based on the backtesting results) are considered shortlisted.</li><li>These shortlisted assets are saved in the file “custom_assets.txt” in the same format as the provided data snippet.</li></ul><p><strong>Disclaimer:</strong></p><ul><li>Backtesting results are not a guarantee of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h3>Conclusion:</h3><p>This article describes a cryptocurrency trading system that utilizes a neural network model (specifically a 2D CNN model) and a trading bot called VishvaAlgo. Here’s a breakdown:</p><p><strong>Data and Model Training:</strong></p><ul><li>The system downloads historical data for 250+ cryptocurrency assets on Binance Futures from TradingView.</li><li>It trains a 2D CNN-based neural network model, achieving a claimed return of 9,800%+ on Ethereum (ETHUSDT) over 3 years of 15-minute time frame data: over 100,000 training rows and 193+ features, with the classification-based 2D CNN estimating the best choice among going neutral, long, or short. 
(<strong>important to note: these returns vary from system to system based on the training data and need re-verification</strong>).</li></ul><p><strong>Hyperparameter Optimization and Asset Selection:</strong></p><ul><li>The system uses Hyperopt (a hyperparameter optimization library) to identify the most suitable assets for the trained model among the downloaded data.</li><li>Each shortlisted asset has a unique set of parameters, such as stop-loss, take-profit, and leverage, tailored to the model’s predictions.</li></ul><p><strong>VishvaAlgo — The Trading Bot:</strong></p><ul><li>VishvaAlgo helps automate live trading using the trained model and the shortlisted assets with their pre-defined parameters.</li><li>The bot offers easy integration with various neural network models for classification.</li><li>A video explaining VishvaAlgo’s features and benefits is available <strong><em>— </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></li></ul><p><strong>Benefits of VishvaAlgo:</strong></p><ul><li>Automates trading based on the trained model and optimized asset selection.</li><li>Offers easy integration with user-defined neural network models.</li><li>A detailed explanation and installation guide are provided with purchase through my Patreon page.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FKWAvZraD5aM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKWAvZraD5aM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FKWAvZraD5aM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/fa4c736694b0d947204a89e359dce943/href">https://medium.com/media/fa4c736694b0d947204a89e359dce943/href</a></iframe><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a 
href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p><strong><em>Disclaimer:</em></strong><em> Trading involves risk. Past performance is not indicative of future results. VishvaAlgo is a tool to assist traders and does not guarantee profits. Please trade responsibly and conduct thorough research before making investment decisions.</em></p><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2105ee7e2893" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[33,885+% Returns in 3 years on Cryptocurrency using Neural Network Transformer Model and short…]]></title>
            <link>https://imbuedeskpicasso.medium.com/33-885-returns-in-3-years-on-cryptocurrency-using-neural-network-transformer-model-and-short-49d0fb7ab78b?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/49d0fb7ab78b</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[cryptocurrency-investment]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Wed, 15 May 2024 10:16:09 GMT</pubDate>
            <atom:updated>2024-05-15T10:16:09.655Z</atom:updated>
<content:encoded><![CDATA[<h3>33,885+% Returns in 3 years on Cryptocurrency using Neural Network Transformer Model and short listing Best Assets for Trading — VishvaAlgo Machine Learning Trading Bot</h3><p>Unleashing the power of Neural Networks for creating a Trading Bot for maximum profits.</p><h3>Introduction:</h3><p>Welcome to the world of algorithmic trading and machine learning, where innovation meets profitability. Over the past three years, I’ve dedicated myself to developing algorithmic trading systems that harness the power of various strategies. Through relentless experimentation and refinement, I’ve achieved impressive returns across multiple strategies, delighting members of <a href="https://www.patreon.com/pppicasso"><strong><em>my Patreon community with consistent profits</em></strong></a>.</p><p>In the pursuit of excellence, I recently launched <a href="https://www.patreon.com/pppicasso/shop"><strong><em>VishvaAlgo, a machine learning-based algorithmic trading system that leverages neural network classification models</em></strong></a><strong><em>.</em></strong> This cutting-edge platform has already demonstrated remarkable results, delivering exceptional returns to traders in the cryptocurrency market. Through a series of articles and practical demonstrations, I’ve shared insights on transitioning from traditional algorithmic trading to deploying practical machine learning models, showcasing their effectiveness in real-world trading environments.</p><p>In this article, we delve into the transformative potential of algorithmic trading and machine learning, focusing on the effectiveness of neural networks, specifically the Transformer technique. 
Building upon our past successes, we set out to demonstrate the remarkable profitability achievable with advanced machine learning models, using Bitcoin (BTC) and Ethereum (ETH) as our primary assets.</p><p>Our analysis focuses on Ethereum pricing in USDT, utilizing 15-minute candlestick data spanning from January 1st, 2021, to October 22nd, 2023, comprising over 97,000 rows of data and more than 190 features. By leveraging neural network models for prediction, we aim to identify optimal long and short positions, showcasing the potential of deep learning in financial markets.</p><blockquote><em>Our story is one of relentless innovation, fueled by a burning desire to unlock the full potential of Deep Learning in the pursuit of profit. In this article, we invite you to join us as we unravel the exciting tale of our transformation from humble beginnings to groundbreaking success.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5mb4wtPIIv3oEm0i.jpg" /></figure><h3>Our Algorithmic Trading Vs/+ Machine Learning Vs/+ Deep Learning Journey so far?</h3><h4>Stage 1:</h4><p>We developed a crypto Algorithmic Strategy which gave us huge profits when run on multiple crypto assets (138+), with a profit of 8787%+ in a span of almost 3 years.</p><h4>“The 8787%+ ROI Algo Strategy Unveiled for Crypto Futures! 
Revolutionized With Famous RSI, MACD, Bollinger Bands, ADX, EMA” — <a href="https://imbuedeskpicasso.medium.com/the-8787-roi-algo-strategy-unveiled-for-crypto-futures-22a5dd88c4a5">Link</a></h4><p>We ran live trading in dry-run mode for the same strategy for 7 days, and details have been shared in another article.</p><h4>“Freqtrade Revealed: 7-Day Journey in Algorithmic Trading for Crypto Futures Market” — <a href="https://imbuedeskpicasso.medium.com/freqtrade-revealed-7-day-journey-in-algorithmic-trading-for-crypto-futures-market-1032c409d6bd">Link</a></h4><p>After <strong>successful backtest results and forward testing</strong> (live trading in dry-run mode), we planned to improve the strategy’s odds of making more profit (lower stop-losses, higher odds of winning, reduced risk factor, and other important things).</p><h4>Stage 2:</h4><p>We then developed a strategy on its own, without the freqtrade setup (forgoing the trailing stop-loss, multiple-asset parallel running, and higher risk-management features that freqtrade, a free open-source platform, provides), tested it in the market, optimized it using hyperparameters, and obtained positive profits from the strategy.</p><h4>“How I achieved 3000+% Profit in Backtesting for Various Algorithmic Trading Bots and how you can do the same for your Trading Strategies — Using Python Code” — <a href="https://medium.com/p/b1de0d20cd39">Link</a></h4><h4>Stage 3:</h4><p>Having tested our strategy on only one asset (BTC/USDT) in the crypto market, we wanted to segregate the whole collection of assets (which we had used for developing the Freqtrade strategy earlier) into different clusters based on their volatility: trading only suitably volatile assets makes profits easier and avoids hitting huge stop-losses on the others.</p><p>We used <strong>K-nearest Neighbors (KNN Means)</strong> to identify different 
clusters of assets out of the 138 crypto assets we use in our freqtrade strategy, which gave us <strong>8000+% profits</strong> during backtesting.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -1 (K-Nearest Neighbors)” — <a href="https://medium.com/p/0c143a6ab7cb">Link</a></h4><h4>Stage 4:</h4><p>Next, we introduced an unsupervised machine learning model, the Hidden Markov Model (HMM), to identify trends in the market so that we trade only during profitable trends and avoid sudden pumps, dumps, and negative stretches. The explanation below unravels the same.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -2 (Hidden Markov Model — HMM)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-2-hidden-markov-model-98e4894e3d9e">Link</a></h4><h4>Stage 5:</h4><p>I worked on using the XGBoost Classifier to identify long and short trades from our old signal. Before using it, we ensured that the signal algorithm we had previously developed was hyper-optimized. We also introduced different stop-loss and take-profit parameters for this setup, which changed the target values accordingly, and we adjusted the parameters used for obtaining profitable trades to match. We then tested the basic XGBClassifier setup and enhanced the results with re-sampling methods: our target classes of 0 (neutral), 1 (long trades), and 2 (short trades) were imbalanced due to trade execution timing, so we employed re-sampling and hyper-optimized the classifier model. Subsequently, we evaluated whether other classifier models, such as SVC, CatBoost, and LightGBM, performed better, in combination with LSTM and XGBoost. 
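The class-imbalance handling mentioned above (re-sampling, sample weights) can be illustrated with a minimal inverse-frequency weighting sketch. The 0/1/2 class labels follow the article; the helper function and the counts are invented for illustration:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (n_classes * count), the common
    'balanced' heuristic: rarer classes get larger weights."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {cls: total / (n_classes * cnt) for cls, cnt in counts.items()}

# Invented imbalanced targets: mostly 0 (neutral), few 1 (long) and 2 (short)
labels = [0] * 80 + [1] * 12 + [2] * 8
weights = inverse_frequency_weights(labels)
print(weights)  # the rarest class (2, short) gets the largest weight
```

Weights like these can be fed to most classifiers as sample or class weights, a lighter-weight alternative to synthetic over-sampling such as ADASYN.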
Finally, we concluded by analyzing the results and determining feature importance parameters to identify the most productive features.</p><h4>“Hyper Optimized Algorithmic Strategy Vs/+ Machine Learning Models Part -3 (XGBoost Classifier , LGBM Classifier, CatBoost Classifier, SVC, LSTM with XGB and Multi level Hyper-optimization)” — <a href="https://imbuedeskpicasso.medium.com/hyper-optimized-algorithmic-strategy-vs-machine-learning-models-part-3-xgboost-classifier-6c4f49c58800">Link</a></h4><h4>Stage 6:</h4><p>In this stage, I utilized the CatBoostClassifier along with resampling and sample weights. I incorporated multiple time frame indicators such as volume, momentum, trend, and volatility into my model. After running the model, I applied ensembling techniques to enhance its overall performance. My analysis showed a significant increase in profit, from 54% to over 4600%, during backtesting. The performance metrics, including recall, precision, accuracy, and F1 score, all exceeded 80% for each of the three trading classes (0 for neutral, 1 for long, and 2 for short trades).</p><h4>“From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best” — <a href="https://imbuedeskpicasso.medium.com/from-54-to-a-staggering-4648-catapulting-cryptocurrency-trading-with-catboost-classifier-75ac9f10c8fc">Link</a></h4><h4>Stage 7:</h4><p>In this stage, the <strong><em>ensemble method combining TCN and LSTM neural network models</em></strong> demonstrated exceptional performance across various datasets, outperforming the individual models and even surpassing buy-and-hold strategies. 
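As a toy sketch of the ensemble idea (not the actual TCN/LSTM code, which is covered in the linked article), soft voting averages each model’s per-class probabilities and picks the argmax; all numbers here are made up:

```python
def soft_vote(proba_a, proba_b):
    """Average per-class probabilities from two models, then take the
    argmax per row (0 = neutral, 1 = long, 2 = short)."""
    preds = []
    for pa, pb in zip(proba_a, proba_b):
        avg = [(x + y) / 2 for x, y in zip(pa, pb)]
        preds.append(avg.index(max(avg)))
    return preds

# Made-up class probabilities (classes 0, 1, 2) from two hypothetical models
model_a = [[0.2, 0.7, 0.1], [0.6, 0.2, 0.2]]
model_b = [[0.1, 0.5, 0.4], [0.3, 0.3, 0.4]]
print(soft_vote(model_a, model_b))  # → [1, 0]
```

Averaging smooths out disagreements between models, which is one reason an ensemble can beat each of its members.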
This underscores the effectiveness of ensemble learning in improving prediction accuracy and robustness.</p><h4>“Bitcoin/BTC 4750%+ , Etherium/ETH 11,270%+ profit in 1023 days using Neural Networks, Algorithmic Trading Vs/+ Machine Learning Models Vs/+ Deep Learning Model Part — 4 (TCN, LSTM, Transformer with Ensemble Method)” — <a href="https://medium.com/p/d5a644cdc36f/">Link</a></h4><h4>Stage 8:</h4><p>Experience the future of trading with VishvaAlgo v3.8. With its advanced features, unparalleled risk management capabilities, and ease of integration of ML and neural network models, VishvaAlgo is the ultimate choice for traders seeking consistent profits and peace of mind. Don’t miss out on this opportunity to revolutionize your trading journey.</p><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.8 Live Crypto Trading Using Machine Learning Model</a></blockquote><h4>“VishvaAlgo v3.0 — Revolutionize Your Live Cryptocurrency Trading system Enhanced with Machine Learning (Neural Network) Model. 
Live Profits Screenshots Shared” — <a href="https://medium.com/p/f4ca0facae7e/">Link</a></h4><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><h3>The code Explanation:</h3><pre># Remove Future Warnings<br>import warnings<br>warnings.simplefilter(action=&#39;ignore&#39;, category=FutureWarning)<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br><br># General<br>import numpy as np<br><br># Data Management<br>import pandas as pd<br><br># Machine Learning<br>from catboost import CatBoostClassifier<br>from sklearn.model_selection import train_test_split<br>from sklearn.model_selection import RandomizedSearchCV, cross_val_score<br>from sklearn.model_selection import RepeatedStratifiedKFold<br>from sklearn.linear_model import LogisticRegression<br><br># ensemble<br>from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier<br>from sklearn.ensemble import StackingClassifier<br>from sklearn.ensemble import VotingClassifier<br><br>#Sampling Methods<br>from imblearn.over_sampling import ADASYN<br><br>#Scaling<br>from sklearn.preprocessing import MinMaxScaler<br><br># Binary Classification 
Specific Metrics<br>from sklearn.metrics import RocCurveDisplay as plot_roc_curve<br><br># General Metrics<br>from sklearn.metrics import accuracy_score, precision_score<br>from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score<br>from sklearn.metrics import ConfusionMatrixDisplay<br><br># Reporting<br>import matplotlib.pyplot as plt<br>from matplotlib.pylab import rcParams<br>from xgboost import plot_tree<br><br># Backtesting<br>from backtesting import Backtest<br>from backtesting import Strategy<br><br># Hyperparameter optimization<br>from hyperopt import fmin, tpe, hp, STATUS_OK, Trials<br><br># Data retrieval<br>from pandas_datareader.data import DataReader<br><br>import json<br>from datetime import datetime<br>import talib as ta<br>import ccxt<br><br># Class weights and neural networks<br>from sklearn.utils import class_weight<br>from keras.models import Sequential<br>from keras.layers import LSTM, Dense, Dropout<br>from keras.optimizers import Adam</pre><p><strong>Import Statements:</strong></p><ul><li><strong>Warnings:</strong></li><li>These lines suppress warnings during execution. This keeps training output uncluttered, but it is generally better to address the warnings themselves when debugging potential issues.</li><li><strong>General Libraries:</strong></li><li>numpy (np): provides numerical computing capabilities, used for array operations and mathematical functions throughout the feature engineering.</li><li>pandas (pd): used for data manipulation, analysis, and visualization. 
Essential here for handling the structured OHLCV time series.</li><li><strong>Machine Learning Libraries:</strong></li><li>catboost: provides the CatBoostClassifier, a powerful gradient-boosting model used later in this series.</li><li>scikit-learn (various submodules): a comprehensive machine learning library:</li><li>train_test_split: splits data into training and testing sets for model evaluation.</li><li>RandomizedSearchCV, cross_val_score, RepeatedStratifiedKFold: techniques for hyperparameter tuning and cross-validated model evaluation.</li><li>LogisticRegression: a linear classification model, usable as a baseline or as a meta-learner for stacking.</li><li><strong>Ensemble Methods:</strong></li><li>RandomForestClassifier, GradientBoostingClassifier, StackingClassifier, VotingClassifier: techniques for combining multiple models to improve performance.</li><li><strong>Sampling Methods:</strong></li><li>imblearn (ADASYN): tools for handling imbalanced datasets, where classes have unequal sizes, as our 0/1/2 trade classes do.</li><li><strong>Scaling:</strong></li><li>MinMaxScaler: normalizes features to a fixed range, a common preprocessing step for machine learning models.</li></ul><p><strong>Metrics:</strong></p><ul><li><strong>Binary Classification Metrics:</strong></li><li>RocCurveDisplay: plots ROC curves when evaluating classifiers.</li><li><strong>General Metrics:</strong></li><li>scikit-learn provides various metrics for evaluating model performance across classification tasks:</li><li>accuracy_score: proportion of correct predictions.</li><li>precision_score: proportion of true positives among predicted positives.</li><li>confusion_matrix: how many instances were classified correctly or incorrectly for each class.</li><li>classification_report: detailed report on model performance, including precision, recall, F1-score, and support for each class.</li><li>roc_curve, roc_auc_score: measures based on the Receiver Operating Characteristic (ROC) curve, which assesses a model&#39;s ability to discriminate between classes.</li></ul><p><strong>Reporting:</strong></p><ul><li>matplotlib.pyplot (plt): creates the charts and graphs used to present data and model results.</li></ul><p><strong>Backtesting:</strong></p><ul><li>backtesting: library for backtesting trading strategies against historical data; used below to evaluate the Target labels.</li></ul><p><strong>Hyperparameter Optimization:</strong></p><ul><li>hyperopt: library for hyperparameter tuning (finding the best settings for machine learning models).</li></ul><p><strong>Data Retrieval:</strong></p><ul><li>pandas_datareader: fetches data from various financial data sources (imported but not used in this snippet).</li></ul><p><strong>Other Imports:</strong></p><ul><li>json: loads the candle data from a JSON file below.</li><li>datetime: for working with date and time objects in the time-series index.</li><li>talib: technical analysis library, used below to compute all of the indicators.</li><li>ccxt: library for interacting with cryptocurrency exchanges (imported here, needed when trading live).</li></ul><p><strong>Context:</strong></p><ul><li>Each library and module is imported with a specific purpose, such as data manipulation, machine learning, evaluation, visualization, backtesting, and hyperparameter optimization.</li><li>These libraries and modules are used throughout the code for data preprocessing, model training, evaluation, optimization, and visualization.</li></ul><pre># Define the path to your JSON file<br>file_path = &#39;./ETH_USDT_USDT-15m-futures.json&#39;<br><br># Open the file and read the data<br>with open(file_path, &quot;r&quot;) as f:<br>    data = json.load(f)<br><br>df = pd.DataFrame(data)<br><br># Extract the OHLC data (adjust column names as needed)<br># ohlc_data = df[[&quot;date&quot;,&quot;open&quot;, &quot;high&quot;, &quot;low&quot;, &quot;close&quot;, &quot;volume&quot;]]<br>df.rename(columns={0: &quot;Date&quot;, 1: &quot;Open&quot;, 2: &quot;High&quot;,3: &quot;Low&quot;, 4: &quot;Adj Close&quot;, 5: &quot;Volume&quot;}, inplace=True)<br><br># Convert timestamps to datetime objects<br>df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;] / 1000, unit=&#39;s&#39;)<br><br>df.set_index(&quot;Date&quot;, inplace=True)<br><br># Format the date index<br>df.index = df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;)<br>df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br><br># print(df.dropna(), df.describe(), df.info())<br><br>data = df<br><br>data</pre><p>To analyze historical cryptocurrency futures data, we first load the data from a JSON file. The code uses Python’s json library to parse the JSON content into a Python list of candle rows. We then convert this list into a pandas DataFrame for easier manipulation. 
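The loading steps can be reproduced on a tiny synthetic example using only the standard library; the row layout [timestamp_ms, open, high, low, close, volume] and the millisecond-to-second conversion follow the article’s code, while the two candles themselves are invented:

```python
import json
from datetime import datetime, timezone

# Two invented 15-minute candles in the exchange's row layout:
# [timestamp_ms, open, high, low, close, volume]
raw = json.dumps([
    [1609459200000, 730.0, 735.5, 728.1, 734.2, 1200.5],
    [1609460100000, 734.2, 736.0, 731.0, 732.8, 980.3],
])

rows = json.loads(raw)
candles = [
    {
        # like the article's code, divide the millisecond timestamp by 1000
        "Date": datetime.fromtimestamp(r[0] / 1000, tz=timezone.utc),
        "Open": r[1], "High": r[2], "Low": r[3],
        "Adj Close": r[4], "Volume": r[5],
    }
    for r in rows
]

print(candles[0]["Date"].strftime("%m-%d-%Y %H:%M"))  # → 01-01-2021 00:00
```

The same rows fed to pd.DataFrame with the rename/set_index calls shown above yield the indexed OHLCV frame the rest of the pipeline consumes.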
The DataFrame is cleaned and transformed by renaming columns, converting timestamps to datetime objects, setting the date as the index, and formatting the date display for better readability.</p><p><strong>Here’s the step-by-step explanation of the code:</strong></p><p><strong>1. Loading JSON Data:</strong></p><ul><li>The code defines a file path (file_path) to a JSON file containing cryptocurrency data (in Open-High-Low-Close-Volume format for Ethereum futures contracts traded against USDT).</li><li>It opens the file for reading (with open(file_path, &quot;r&quot;) as f:) and uses json.load(f) to parse the JSON content into a Python list of candle rows (data).</li></ul><p><strong>2. Converting to DataFrame:</strong></p><ul><li>The code creates a pandas DataFrame (df) from the loaded list (data). A DataFrame is a tabular data structure similar to a spreadsheet, making it easier to work with and analyze the data.</li></ul><p><strong>3. Data Cleaning and Transformation:</strong></p><ul><li>This part assumes the JSON data has columns with numerical indices (0, 1, 2, etc.) instead of meaningful names. It renames these columns to more descriptive labels (&quot;Date&quot;, &quot;Open&quot;, &quot;High&quot;, &quot;Low&quot;, &quot;Adj Close&quot;, &quot;Volume&quot;) using df.rename(columns={...}, inplace=True).</li><li>It converts the &quot;Date&quot; column from timestamps (in milliseconds since the Unix epoch, hence the division by 1000) to datetime objects using pd.to_datetime(). This makes it easier to work with dates and perform time-based operations.</li><li>The code sets the &quot;Date&quot; column as the index of the DataFrame using df.set_index(&quot;Date&quot;, inplace=True). 
This allows you to efficiently access and filter data based on dates.</li><li>It formats the date index using df.index.strftime(&quot;%m-%d-%Y %H:%M&quot;) to display dates in a more readable format (e.g., &quot;05-14-2024 16:35&quot;).</li><li>Finally, it copies the &quot;Adj Close&quot; column (the adjusted closing price) into a new &quot;Close&quot; column for clearer reference later on.</li></ul><pre># Assuming you have a DataFrame named &#39;df&#39; with columns &#39;Open&#39;, &#39;High&#39;, &#39;Low&#39;, &#39;Close&#39;, &#39;Adj Close&#39;, and &#39;Volume&#39;<br>target_prediction_number = 2<br>time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br>name_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br><br>df = data.copy()<br># Note: nesting a second loop over name_periods would recompute every<br># indicator 12x with identical results, so we iterate time_periods only<br>for period in time_periods:<br>    df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>    df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period*2)<br>    df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=int(period*0.5))  # TA-Lib expects integer periods<br>    df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>    df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>    df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period*2)<br>    df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period*2)<br>    df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>    df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>    macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), signalperiod=(period + 9))<br>    df[f&#39;MACD_{period}&#39;] = macd<br>    df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>    bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=int(period*0.5), nbdevup=2, nbdevdn=2)<br>    df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>    df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>    df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>    df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], timeperiod=(period+35))<br><br>df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br><br># Volume-Based Indicators<br>df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br># Momentum-Based Indicators<br>df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br># Calculate the Elliott Wave Oscillator (EWO)<br>#df[&#39;EWO&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=5) - ta.SMA(df[&#39;Close&#39;], timeperiod=35)<br><br># Volatility-Based Indicators<br># df[&#39;ATR&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=14)<br># df[&#39;BB_upper&#39;], df[&#39;BB_middle&#39;], df[&#39;BB_lower&#39;] = ta.BBANDS(df[&#39;Close&#39;], timeperiod=20, nbdevup=2, nbdevdn=2)<br># df[&#39;KC_upper&#39;], df[&#39;KC_middle&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=20), ta.EMA(df[&#39;Low&#39;], timeperiod=20)<br># df[&#39;Donchian_upper&#39;], df[&#39;Donchian_lower&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=20), ta.MIN(df[&#39;Low&#39;], 
timeperiod=20)<br><br># Trend-Based Indicators<br># df[&#39;MA&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=20)<br># df[&#39;EMA&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=20)<br>df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br><br># Set pandas option to display all columns<br>pd.set_option(&#39;display.max_columns&#39;, None)<br><br># Displaying the calculated indicators<br>print(df.tail())<br><br>df.dropna(inplace=True)<br>print(&quot;Length: &quot;, len(df))<br>df</pre><p>This code demonstrates the calculation of various technical indicators using the talib library. The code iterates through different time periods to compute indicators like Average True Range (ATR), Exponential Moving Average (EMA), Relative Strength Index (RSI), and several others. Additionally, it calculates features like returns, range, and volatility to potentially use as input features for machine learning models.</p><p><strong>1. Technical Indicator Calculations:</strong></p><ul><li>The code defines two lists of periods, time_periods and name_periods, with identical values here. 
This might be a placeholder for using different sets of periods for the indicators in the future.</li><li>Within the loops, it calculates numerous technical indicators for each specified time period (period) using talib functions:</li><li><strong>Average True Range (ATR):</strong> Measures market volatility (df[f&#39;ATR_{period}&#39;]).</li><li><strong>Exponential Moving Average (EMA):</strong> Calculates EMAs with a period twice the loop’s period (df[f&#39;EMA_{period}&#39;]).</li><li><strong>Relative Strength Index (RSI):</strong> Calculates RSI with a period half the loop’s period (df[f&#39;RSI_{period}&#39;]).</li><li><strong>Volume-Weighted Average Price (VWAP):</strong> Calculates VWAP for the period (df[f&#39;VWAP_{period}&#39;]).</li><li><strong>Rate of Change (ROC):</strong> Calculates ROC for the period (df[f&#39;ROC_{period}&#39;]).</li><li><strong>Keltner Channels (KC):</strong> Calculates upper and middle bands based on EMAs of highs and lows (df[f&#39;KC_upper_{period}&#39;], df[f&#39;KC_middle_{period}&#39;]).</li><li><strong>Donchian Channels:</strong> Calculates upper and lower bands based on maximum and minimum highs/lows within the period (df[f&#39;Donchian_upper_{period}&#39;], df[f&#39;Donchian_lower_{period}&#39;]).</li><li><strong>Moving Average Convergence Divergence (MACD):</strong> Calculates MACD and its signal line for the period (df[f&#39;MACD_{period}&#39;], df[f&#39;MACD_signal_{period}&#39;]).</li><li><strong>Bollinger Bands (BB):</strong> Calculates upper, middle, and lower bands for the period (df[f&#39;BB_upper_{period}&#39;], df[f&#39;BB_middle_{period}&#39;], df[f&#39;BB_lower_{period}&#39;]).</li><li><strong>Elliott Wave Oscillator (EWO):</strong> Calculates EWO for the period (df[f&#39;EWO_{period}&#39;]).</li><li><strong>Target Prediction and Feature Engineering:</strong></li><li>The code defines a target_prediction_number (presumably the number of periods ahead you aim to predict).</li><li>It calculates “Returns” as the 
percentage change in adjusted close prices over the target_prediction_number periods (df[&quot;Returns&quot;]).</li><li>It calculates “Range” as the difference between high and low prices divided by the low price (df[&quot;Range&quot;]).</li><li>It calculates “Volatility” as the rolling standard deviation of returns over the target_prediction_number periods (df[&quot;Volatility&quot;]).</li><li><strong>Additional Indicators:</strong></li><li>The code calculates On-Balance Volume (OBV) and Accumulation Distribution Line (ADL) using talib functions (df[&#39;OBV&#39;], df[&#39;ADL&#39;]).</li><li>It calculates the Stochastic Oscillator using talib (df[&#39;Stoch_Oscillator&#39;]).</li><li>It calculates the Parabolic Stop and Reversal (PSAR) using talib (df[&#39;PSAR&#39;]).</li></ul><h3>Data Pre-Processing — Setting up the “Target” value for estimating future predictive values</h3><pre># Target flexible way<br>pipdiff_percentage = 0.01  # 1% (0.01) of the asset&#39;s price for TP<br>SLTPRatio = 2.0  # pipdiff/Ratio gives SL<br>def mytarget(barsupfront, df1):<br>    length = len(df1)<br>    high = list(df1[&#39;High&#39;])<br>    low = list(df1[&#39;Low&#39;])<br>    close = list(df1[&#39;Close&#39;])<br>    open_ = list(df1[&#39;Open&#39;])  # Renamed &#39;open&#39; to &#39;open_&#39; to avoid conflict with Python&#39;s built-in function<br>    trendcat = [None] * length<br>    for line in range(0, length - barsupfront - 2):<br>        valueOpenLow = 0<br>        valueOpenHigh = 0<br>        for i in range(1, barsupfront + 2):<br>            value1 = open_[line + 1] - low[line + i]<br>            value2 = open_[line + 1] - high[line + i]<br>            valueOpenLow = max(value1, valueOpenLow)<br>            valueOpenHigh = min(value2, valueOpenHigh)<br>            if (valueOpenLow &gt;= close[line + 1] * pipdiff_percentage) and (<br>                    -valueOpenHigh &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio):<br>                trendcat[line] = 2  # 
downtrend<br>                break<br>            elif (valueOpenLow &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio) and (<br>                    -valueOpenHigh &gt;= close[line + 1] * pipdiff_percentage):<br>                trendcat[line] = 1  # uptrend<br>                break<br>            else:<br>                trendcat[line] = 0  # no clear trend<br><br>    return trendcat</pre><p>This code defines a function mytarget that attempts to identify potential trends and set target values accordingly. It calculates the difference between the open price and upcoming highs/lows within a specified timeframe (barsupfront). Based on these differences and thresholds defined by pipdiff_percentage and SLTPRatio, the function classifies the trend as uptrend, downtrend, or no clear trend. These classifications could then be used to set target buy/sell prices in a trading strategy.</p><p><strong>Here’s the breakdown of the code:</strong></p><p><strong>Parameters:</strong></p><ul><li>barsupfront (integer): The number of bars to look ahead from the current bar for trend classification.</li><li>df1 (pandas DataFrame): The DataFrame containing OHLC (Open, High, Low, Close) prices.</li></ul><p><strong>Function Logic:</strong></p><ol><li><strong>Initialization:</strong></li></ol><ul><li>It retrieves the length of the DataFrame (length).</li><li>It extracts lists of high, low, close, and open prices (high, low, close, open_). Note that open is renamed to open_ to avoid conflicts with Python&#39;s built-in open function.</li><li>It initializes a list trendcat with length elements, all set to None, which will eventually hold the trend category (uptrend, downtrend, or no trend) for each bar.</li></ul><p><strong>2. 
Trend Classification Loop:</strong></p><ul><li>The code iterates over the bars, stopping barsupfront + 2 bars before the end so that it always has enough future bars to look ahead.</li><li>Inside the loop it tracks two values over the next barsupfront + 1 bars:</li><li>valueOpenLow: the largest drop below the entry open, i.e. the maximum of open_[line + 1] minus the upcoming lows.</li><li>valueOpenHigh: the most negative value of open_[line + 1] minus the upcoming highs, so -valueOpenHigh is the largest rise above the entry open.</li><li>It compares these against thresholds derived from pipdiff_percentage (1% of the close, the take-profit distance) and SLTPRatio (which sets the stop-loss at half of that, 0.5%):</li><li>If the drop reaches at least 1% of the close (valueOpenLow &gt;= close[line + 1] * pipdiff_percentage) while the rise stays within 0.5% (-valueOpenHigh &lt;= close[line + 1] * pipdiff_percentage / SLTPRatio), a short trade would hit its take-profit before its stop-loss, so the bar is labeled downtrend (trendcat[line] = 2).</li><li>Conversely, if the rise reaches at least 1% while the drop stays within 0.5%, a long trade would win, so the bar is labeled uptrend (trendcat[line] = 1).</li><li>If neither condition is met, the bar is labeled no clear trend (trendcat[line] = 0).</li></ul><p><strong>3. Return:</strong></p><ul><li>The function returns the trendcat list with a trend classification for each bar; the final bars near the end of the data, which lack enough look-ahead room, remain None.</li></ul><pre>#!!! 
pitfall one category high frequency<br>df[&#39;Target&#39;] = mytarget(2, df)<br>df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)<br>#df.tail(20)<br>df.replace([np.inf, -np.inf], np.nan, inplace=True)<br>df.dropna(axis=0, inplace=True)<br><br># Convert columns to integer type<br>df = df.astype(int)<br>#df[&#39;Target&#39;] = df[&#39;Target&#39;].astype(int)<br>df[&#39;Target&#39;].hist()<br><br>count_of_twos_target = df[&#39;Target&#39;].value_counts().get(2, 0)<br>count_of_zeros_target = df[&#39;Target&#39;].value_counts().get(0, 0)<br>count_of_ones_target = df[&#39;Target&#39;].value_counts().get(1, 0)<br>percent_of_zeros_over_ones_and_twos = (100 - (count_of_zeros_target/ (count_of_zeros_target + count_of_ones_target + count_of_twos_target))*100)<br>print(f&#39; count_of_zeros = {count_of_zeros_target}\n count_of_twos_target = {count_of_twos_target}\n count_of_ones_target={count_of_ones_target}\n percent_of_zeros_over_ones_and_twos = {round(percent_of_zeros_over_ones_and_twos,2)}%&#39;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/373/1*hnmezwvqGGgIlWUVFCut6Q.png" /><figcaption>output of the above code</figcaption></figure><p>After assigning trend classifications (Target) based on the mytarget function, the code performs data cleaning by handling infinities and removing rows with missing values. It then analyzes the distribution of target values using a histogram and calculates the proportion of bars classified as each trend category. This helps assess the balance between clear uptrends, downtrends, and periods with no clear trend in the data.</p><p><strong>1. 
Assigning Target Values and Shifting:</strong></p><ul><li>The code assigns the output of mytarget(2, df) (the trend classifications) to the &#39;Target&#39; column (df[&#39;Target&#39;] = mytarget(2, df)).</li><li>It then shifts the &#39;Target&#39; values down by one row (df[&#39;Target&#39;] = df[&#39;Target&#39;].shift(1)), because the trend classification is based on future price movements: the target stored at bar n is the classification computed at bar n-1.</li></ul><p><strong>2. Handling Infinities and Missing Values:</strong></p><ul><li>The code replaces positive and negative infinity (np.inf and -np.inf) with NaN (Not a Number) values in the DataFrame (df.replace([np.inf, -np.inf], np.nan, inplace=True)). This is necessary because some mathematical operations cannot handle infinities.</li><li>It then removes rows with missing values (NaN) from the DataFrame (df.dropna(axis=0, inplace=True)) to ensure clean data for further analysis.</li></ul><p><strong>3. Converting Data Types:</strong></p><ul><li>The line df = df.astype(int) converts every column in the DataFrame to integers. Only the &#39;Target&#39; column (with classes 0, 1, and 2) genuinely needs this; applying it to the float-valued indicator columns truncates their fractional parts, so converting just the &#39;Target&#39; column (as in the commented-out alternative) would be safer.</li></ul><p><strong>4. Analyzing Target Distribution:</strong></p><ul><li>The code plots a histogram of the &#39;Target&#39; column (df[&#39;Target&#39;].hist()). This helps visualize the distribution of target values (uptrend, downtrend, or no trend) across the data.</li><li>It then calculates the counts of each target value (1, 2, and 0) using value_counts().</li><li>Finally, it computes percent_of_zeros_over_ones_and_twos as 100 minus the percentage of “no trend” bars, in other words the share of bars labeled as a clear uptrend or downtrend. 
This provides insights into the balance between clear trends and unclear trends in the data.</li></ul><p>This code segment effectively calculates target categories based on predefined criteria and provides insights into the distribution of these categories within the dataset.</p><h3>Checking Whether the Code Above Gives the Best Possible Returns for the “Target” Data Created:</h3><pre># Check for NaN values:<br>has_nan = df[&#39;Target&#39;].isnull().values.any()<br>print(&quot;NaN values present:&quot;, has_nan)<br><br># Check for infinite values:<br>has_inf = df[&#39;Target&#39;].isin([np.inf, -np.inf]).values.any()<br>print(&quot;Infinite values present:&quot;, has_inf)<br><br># Count the number of NaN and infinite values:<br>nan_count = df[&#39;Target&#39;].isnull().sum()<br>inf_count = (df[&#39;Target&#39;] == np.inf).sum() + (df[&#39;Target&#39;] == -np.inf).sum()<br>print(&quot;Number of NaN values:&quot;, nan_count)<br>print(&quot;Number of infinite values:&quot;, inf_count)<br><br># Get the indices of NaN and infinite values:<br>nan_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isnull()]<br>inf_indices = df[&#39;Target&#39;].index[df[&#39;Target&#39;].isin([np.inf, -np.inf])]<br>print(&quot;Indices of NaN values:&quot;, nan_indices)<br>df[&#39;Target&#39;]<br><br>df = df.reset_index(inplace=False)<br>df[&#39;Date&#39;] = pd.to_datetime(df[&#39;Date&#39;])<br>df.set_index(&#39;Date&#39;, inplace=True)<br><br>def SIGNAL(df):<br>    return df[&#39;Target&#39;]<br><br>from backtesting import Backtest, Strategy  # Backtest is needed for the run below<br><br>class MyCandlesStrat(Strategy):<br>    def init(self):<br>        super().init()<br>        self.signal1 = self.I(SIGNAL, self.data)<br><br>    def next(self):<br>        super().next()<br>        if self.signal1 == 1:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1 == 2:<br>            sl_pct = 0.025  # 2.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br><br>bt = Backtest(df, MyCandlesStrat, cash=100000, commission=.001, exclusive_orders=True)<br>stat = bt.run()<br>stat</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/368/1*0kb5rR7CnLf7tani_mNMLQ.png" /><figcaption>output of above code</figcaption></figure><ol><li><strong>Checking for Missing and Infinite Values:</strong></li></ol><ul><li>The code checks for the presence of NaN (Not a Number) and infinite values in the &#39;Target&#39; column (df[&#39;Target&#39;]).</li><li>It then counts the number of occurrences and retrieves the indices of these values.</li><li>These checks are crucial because backtesting libraries typically cannot handle missing or infinite values in signals.</li></ul><p><strong>2. Backtesting Framework Setup:</strong></p><ul><li>The code defines a function SIGNAL(df) that simply returns the &#39;Target&#39; column values. 
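The bracket arithmetic inside next() can be factored into a small standalone helper that makes the long/short symmetry explicit; the function below is an illustrative sketch (its name and standalone form are mine, not part of the strategy above):

```python
def sl_tp_prices(close, pct, side):
    """Return (stop_loss, take_profit) prices a fixed percentage away
    from the close, mirroring the 2.5% brackets used in next()."""
    if side == "long":     # buy: SL below the close, TP above it
        return close * (1 - pct), close * (1 + pct)
    if side == "short":    # sell: SL above the close, TP below it
        return close * (1 + pct), close * (1 - pct)
    raise ValueError("side must be 'long' or 'short'")

sl, tp = sl_tp_prices(2000.0, 0.025, "long")
print(round(sl, 2), round(tp, 2))  # 1950.0 2050.0
```

Keeping the bracket logic in one place makes it easier to later optimize the percentage or make it asymmetric for longs and shorts.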
This function essentially provides the buy/sell signals based on the target classifications (1 for uptrend buy, 2 for downtrend sell).</li><li>It imports the Strategy class from the backtesting library.</li><li>It defines a custom strategy class MyCandlesStrat that inherits from Strategy.</li><li>The init method initializes an indicator named signal1 that holds the target values using the I function (presumably from backtesting).</li><li>The next method defines the trading logic:</li><li>If the signal1 is 1 (uptrend), it places a buy order with a stop-loss and take-profit based on percentages of the closing price.</li><li>If the signal1 is 2 (downtrend), it places a sell order with a stop-loss and take-profit based on percentages of the closing price.</li></ul><p><strong>3. Backtesting and Evaluation:</strong></p><ul><li>The code creates a Backtest object using the backtesting library. It provides the DataFrame (df), the strategy class (MyCandlesStrat), initial capital (cash), commission rate (commission), and sets exclusive_orders to True (potentially to prevent overlapping orders).</li><li>It runs the backtest using the bt.run() method and stores the results in the stat variable.</li></ul><p><strong>Does this code definitively determine the effectiveness of the target values?</strong></p><p>No, this code doesn’t definitively determine the effectiveness of the target values. Here’s why:</p><ul><li><strong>Parameter Optimization:</strong> The stop-loss and take-profit percentages (sl_pct and tp_pct) are fixed in the code. Optimizing these parameters for the specific strategy and market conditions could potentially improve performance.</li><li><strong>Single Backtest Run:</strong> Running the backtest only once doesn’t account for the inherent randomness in financial markets. 
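To make the parameter-optimization point concrete, here is a toy grid search over stop-loss/take-profit pairs on a synthetic price path; this is plain Python with a deliberately naive scoring rule, not the backtesting library:

```python
# Synthetic price path and a naive long-only evaluation: enter at every bar,
# exit at whichever bracket (stop-loss or take-profit) the path touches first.
prices = [100, 102, 101, 104, 103, 106, 105, 108]

def bracket_return(entry_idx, sl_pct, tp_pct):
    entry = prices[entry_idx]
    sl, tp = entry * (1 - sl_pct), entry * (1 + tp_pct)
    for p in prices[entry_idx + 1:]:
        if p <= sl:
            return -sl_pct   # stopped out
        if p >= tp:
            return tp_pct    # target hit
    return 0.0               # neither bracket reached

# Grid-search every (sl, tp) pair and keep the best total return
best = max(
    ((sl, tp, sum(bracket_return(i, sl, tp) for i in range(len(prices) - 1)))
     for sl in (0.01, 0.02, 0.03)
     for tp in (0.01, 0.02, 0.03)),
    key=lambda t: t[2],
)
print(best)  # the widest take-profit wins on this steadily rising path
```

A real optimization would score candidates with the backtester itself (Sharpe Ratio, drawdown) and validate them on held-out periods to avoid overfitting the brackets.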
Ideally, you’d validate the strategy across several distinct time windows (for example, walk-forward testing), since a single pass over one period says little about robustness; note that a backtest on fixed historical data is deterministic, so re-running it with different random seeds would not change the result.</li></ul><p><strong>How to improve the code for target evaluation?</strong></p><ul><li><strong>Calculate Performance Metrics:</strong> Modify the code to calculate and print relevant performance metrics like Sharpe Ratio, drawdown, and total profit after the backtest run.</li><li><strong>Optimize Stop-Loss and Take-Profit:</strong> Implement a parameter optimization process to find the best stop-loss and take-profit values for the strategy using the target signals.</li><li><strong>Multiple Backtest Periods:</strong> Run the backtest over several different time windows (e.g., using a loop over date ranges) and analyze the distribution of performance metrics to assess the strategy’s consistency.</li></ul><p>By incorporating these improvements, we can gain a more comprehensive understanding of how well the target values from the mytarget function perform in a backtesting framework. Remember, backtesting results are not guarantees of future performance, so real-world testing with a smaller capital allocation is essential before deploying a strategy with real money.</p><h3>Scaling and splitting the dataframe for training and testing:</h3><pre>from sklearn.preprocessing import MinMaxScaler<br>from sklearn.model_selection import train_test_split<br><br>scaler = MinMaxScaler(feature_range=(0,1))<br><br>df_model = df.copy()<br># Split into Learning (X) and Target (y) Data<br>X = df_model.iloc[:, : -1]<br>y = df_model.iloc[:, -1]<br><br>X_scaled = scaler.fit_transform(X)<br><br># Define a function to reshape the data<br>def reshape_data(data, time_steps):<br>    samples = len(data) - time_steps + 1<br>    reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>    for i in range(samples):<br>        reshaped_data[i] = data[i:i + time_steps]<br>    return reshaped_data<br><br># Reshape the scaled X data<br>time_steps = 1  # Adjust the number of time steps as needed<br>X_reshaped = reshape_data(X_scaled, time_steps)<br><br># Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, 
features)<br># Each sample contains scaled data for a specific time window<br><br># Align y with X_reshaped by discarding excess target values<br>y_aligned = y[time_steps - 1:]  # Discard the first (time_steps - 1) target values<br><br>X = X_reshaped<br>y = y_aligned<br><br>print(len(X),len(y))<br><br># Split data into train and test sets (considering time series data)<br>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)<br></pre><p><strong>1. Data Preparation:</strong></p><ul><li><strong>Copying Data:</strong> It creates a copy of the original DataFrame (df_model = df.copy()) to avoid modifying the original data.</li></ul><p><strong>2. Splitting Features and Target:</strong></p><ul><li><strong>Separating Features (X) and Target (y):</strong> It separates the features (all columns except the last) and the target variable (the last column) using slicing (X = df_model.iloc[:, : -1], y = df_model.iloc[:, -1]).</li></ul><p><strong>3. Scaling Features:</strong></p><ul><li><strong>MinMaxScaler:</strong> It creates a MinMaxScaler object to scale the features between 0 and 1 (scaler = MinMaxScaler(feature_range=(0,1))). This can be helpful for some machine learning algorithms that work better with normalized data.</li><li><strong>Scaling X:</strong> It scales the feature data (X) using the fit_transform method of the scaler (X_scaled = scaler.fit_transform(X)).</li></ul><p><strong>4. 
Reshaping Data (Windowing):</strong></p><ul><li><strong>Reshape Function:</strong> It defines a function reshape_data that takes the data and the number of time steps (time_steps) as input.</li><li>This function iterates through the data with a sliding window of time_steps and creates a new 3D array (reshaped_data).</li><li>Each element in the new array represents a sample, containing a sequence of time_steps data points for each feature.</li><li><strong>Reshaping Scaled X:</strong> It defines the number of time steps (time_steps) and reshapes the scaled feature data (X_scaled) using the reshape_data function (X_reshaped = reshape_data(X_scaled, time_steps)).</li><li>This step transforms the data into a format suitable for time series forecasting models that require sequences of past observations to predict future values.</li></ul><p><strong>5. Aligning Target with Reshaped Data:</strong></p><ul><li><strong>Discarding Excess Target Values:</strong> Since the reshaped data (X_reshaped) considers a window of time_steps, the corresponding target values need an adjustment. It discards the first time_steps - 1 target values from y to align with the reshaped data (y_aligned = y[time_steps - 1:]).</li></ul><p><strong>6. 
Final Splitting (Train-Test):</strong></p><ul><li><strong>Train-Test Split:</strong> It splits the reshaped features (X) and aligned target (y) into training and testing sets using train_test_split from scikit-learn (X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)).</li><li>It sets test_size=0.3 to allocate 30% of the data for testing and shuffle=False because shuffling data in time series can disrupt the temporal order.</li></ul><p><strong>Overall, this code effectively addresses key aspects of data preparation for time series forecasting models:</strong></p><ul><li>Scaling features to a common range can improve model performance for some algorithms.</li><li>Reshaping data into a 3D structure with time steps allows models to learn from sequences of past observations.</li><li>Aligning the target variable with the reshaped data ensures the model predicts for the correct time steps.</li><li>Splitting data into training and testing sets with shuffle=False preserves the temporal order for time series forecasting.</li></ul><p><strong>Additional Considerations:</strong></p><ul><li>The choice of scaler (MinMaxScaler, StandardScaler, etc.) 
might depend on the specific model and data characteristics.</li><li>You might explore different window sizes (time_steps) to see how they affect model performance.</li><li>Techniques like stationarity checks and differencing might be necessary for certain time series data before applying these steps.</li></ul><h3>Transformer Model Manual Optimization</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cUlgoh3oHRGGLj8Q.png" /></figure><pre>from keras.layers import Input, Dense, Dropout<br>from keras.models import Model<br>from keras.optimizers import Adam<br>from keras.metrics import Precision, Recall<br>from keras_self_attention import SeqSelfAttention<br>from keras.utils import to_categorical<br>from tensorflow.keras.layers import MultiHeadAttention<br><br>class_weights = {0: 3.33, 1: 3.33, 2: 3.34}  # Adjust weights as needed<br><br># Define Transformer-based model with multiple hidden layers<br>def build_transformer_model(input_shape, units=193, dropout=0.2, lr=0.0001):<br>    inputs = Input(shape=input_shape)<br>    # attention = MultiHeadAttention(num_heads=6, key_dim=80)(inputs, inputs)<br>    attention = MultiHeadAttention(num_heads=6, key_dim=64)(inputs, inputs)<br>    hidden = Dense(units, activation=&#39;relu&#39;)(attention)<br>    dropout_layer = Dropout(dropout)(hidden)<br>    <br>    # First hidden layer<br>    dense_layer_1 = Dense(units=96, activation=&#39;relu&#39;)(dropout_layer)  # 96, 48, 24 - 70% return <br>    dropout_layer_1 = Dropout(dropout)(dense_layer_1)<br>    <br>    # Second hidden layer<br>    dense_layer_2 = Dense(units=48, activation=&#39;relu&#39;)(dropout_layer_1)<br>    dropout_layer_2 = Dropout(dropout)(dense_layer_2)<br>    <br>    # Third hidden layer<br>    dense_layer_3 = Dense(units=12, activation=&#39;relu&#39;)(dropout_layer_2)<br>    dropout_layer_3 = Dropout(dropout)(dense_layer_3)<br>    <br>    # Output layer<br>    outputs = Dense(3, activation=&#39;softmax&#39;)(dropout_layer_3)<br>    <br>    
model = Model(inputs=inputs, outputs=outputs)<br>    optimizer = Adam(learning_rate=lr)<br>    model.compile(optimizer=optimizer, loss=&#39;sparse_categorical_crossentropy&#39;, metrics=[&#39;accuracy&#39;])<br>    return model<br><br># Convert y_train to one-hot encoded format<br>y_train_one_hot = to_categorical(y_train, num_classes=3)<br><br># Instantiate the model<br>model_transformer = build_transformer_model(input_shape=(X_train.shape[1], X_train.shape[2]))<br><br># Fit the model to the training data<br># model_transformer.fit(X_train, y_train, epochs=50, batch_size=18, validation_split=0.2, verbose=1, class_weight=class_weights)<br>model_transformer.fit(X_train, y_train, epochs=50, batch_size=18, validation_split=0.2, verbose=1)<br></pre><p>This code defines and trains a Transformer-based model for classifying ETH price movements into three categories: neutral (0), long (1), and short (2). Here’s a breakdown:</p><p><strong>1. Imports:</strong></p><ul><li>Keras libraries for building and training the model (layers, models, optimizers, metrics).</li><li>keras_self_attention library for the SeqSelfAttention layer (might be deprecated, replaced by MultiHeadAttention from TensorFlow).</li><li>tensorflow.keras.layers for MultiHeadAttention.</li><li>to_categorical from keras.utils for converting class labels to one-hot encoded format.</li></ul><p><strong>2. Class Weights (Optional):</strong></p><ul><li>Defines class weights (class_weights) to address potential class imbalance (unequal distribution of samples across classes). Higher weights are assigned to less frequent classes for the model to prioritize them during training.</li></ul><p><strong>3. 
Model Building Function (</strong><strong>build_transformer_model):</strong></p><ul><li>Takes input_shape (number of features and time steps), units (number of hidden neurons), dropout rate, and lr (learning rate) as arguments.</li><li>Defines the model architecture:</li><li><strong>Input:</strong> Takes data of the specified input_shape.</li><li><strong>MultiHeadAttention:</strong> This layer is the core of the Transformer. It allows the model to focus on relevant parts of the input sequence for each time step, capturing relationships between data points.</li><li>num_heads defines the number of parallel attention heads, allowing the model to learn different representations of the input. (This layer supersedes the imported but unused SeqSelfAttention.)</li><li>key_dim defines the dimension of the key and value vectors used for attention calculations.</li><li><strong>Hidden Layers:</strong> Four dense layers with ReLU activation, each followed by a dropout layer for regularization (preventing overfitting). The number of units in each layer (units, 96, 48, 12) defines the model&#39;s complexity.</li><li><strong>Output Layer:</strong> A dense layer with 3 units and softmax activation for predicting probabilities of the three classes (neutral, long, short).</li><li>Compiles the model with the Adam optimizer, the sparse categorical cross-entropy loss function for multi-class classification, and an accuracy metric.</li></ul><p><strong>4. Data Preprocessing:</strong></p><ul><li>Assumes X_train and y_train represent your training data for features and target labels, respectively.</li><li>y_train_one_hot converts the target labels to one-hot encoded format. Note, however, that the model is compiled with sparse_categorical_crossentropy, which expects integer labels, so the fit call below uses y_train directly and the one-hot version is never actually used.</li></ul><p><strong>5. 
Model Training:</strong></p><ul><li>Creates an instance of the build_transformer_model with the desired input shape.</li><li>Fits the model to X_train and y_train (the integer labels, as required by the sparse categorical cross-entropy loss) for a specified number of epochs (iterations), batch size, and validation split.</li><li>You can switch to the commented-out fit call with the class_weight argument to use the defined class weights.</li></ul><h4>How Transformers Work in Time Series Classification with 0, 1, 2 Labels</h4><ol><li><strong>Input:</strong> The model takes a sequence of features (e.g., past closing prices, technical indicators) for each time step as input.</li><li><strong>Multi-Head Attention:</strong> This layer allows the model to attend to different parts of the input sequence for each time step. It learns multiple “heads” (representations) of the data, enabling it to capture complex relationships between past data points and the predicted class (neutral, long, short).</li><li><strong>Hidden Layers:</strong> These layers process the information from the attention layer, extracting higher-level features and learning a mapping from the input features to the class probabilities.</li><li><strong>Output Layer:</strong> The final layer predicts the probabilities of the three classes (neutral, long, short) using the softmax activation function. 
The class with the highest probability is the predicted position for the ETH price movement.</li></ol><p><strong>Key Points:</strong></p><ul><li>Transformers excel at capturing long-range dependencies in time series data, making them suitable for tasks like price movement prediction.</li><li>The MultiHeadAttention layer plays a crucial role in allowing the model to focus on relevant past information for each prediction.</li><li>The 0, 1, 2 labels represent the three classes: neutral (0) for no significant price movement, long (1) for an upward trend, and short (2) for a downward trend.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>The provided code might require adjustments based on your specific data and desired performance. Hyperparameter tuning (e.g., number of units, dropout rate, learning rate) is crucial for optimizing the model.</li><li>Consider using techniques like normalization or standardization for your features to improve model performance.</li></ul><p><strong>Limitations and Considerations:</strong></p><ul><li><strong>Data Requirements:</strong> Transformers often require a large amount of training data to learn effectively. If your dataset is limited, consider using simpler models or techniques like LSTMs (Long Short-Term Memory) that might perform well with less data.</li><li><strong>Computational Cost:</strong> Training Transformer models can be computationally expensive, especially with large datasets and complex architectures. This might require powerful GPUs for faster training.</li><li><strong>Interpretability:</strong> While Transformers are powerful, they can be less interpretable than simpler models. Understanding which features contribute most to the prediction can be challenging. 
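To demystify the attention weights mentioned here, a minimal NumPy sketch of single-head scaled dot-product attention (random toy matrices, not the trained model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation inside MultiHeadAttention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 time steps, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)               # (4, 8) (4, 4)
print(np.allclose(w.sum(axis=1), 1.0))  # True: each row of weights sums to 1
```

Row i of the weight matrix shows how strongly time step i attends to every other time step, which is exactly what attention-weight visualizations plot.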
Consider using techniques like Layer-wise Relevance Propagation (LRP) or visualizing attention weights to gain insights into the model’s decision-making process.</li></ul><p><strong>Further Exploration:</strong></p><ul><li>Experiment with different hyperparameters (number of layers, units, attention heads) to find the best configuration for your data and task.</li><li>Explore other Transformer architectures like convolutional transformers or recurrent transformers that might be better suited for specific time series applications.</li><li>Consider incorporating additional features like technical indicators or fundamental data points to potentially improve the model’s prediction accuracy.</li><li>Evaluate the model’s performance using various metrics like precision, recall, F1-score, or a custom metric based on your specific trading strategy.</li></ul><p><strong>Real-World Considerations:</strong></p><ul><li>Financial markets are complex and influenced by various factors. Past price movements don’t guarantee future performance.</li><li>Use the model predictions as a guide, not a definitive signal. 
Consider risk management strategies and other factors before making trading decisions.</li><li>Backtest your model on historical data to assess its performance in different market conditions.</li></ul><pre>import numpy as np<br>from sklearn.metrics import confusion_matrix<br>import matplotlib.pyplot as plt<br>import seaborn as sns<br><br># # Reshape X_train and X_test back to their original shapes<br># X_train_original_shape = X_train.reshape(X_train.shape[0], -1)<br># X_test_original_shape = X_test.reshape(X_test.shape[0], -1)<br><br># X_test_reshaped = X_test_original_shape.reshape(-1, 1, X_test_original_shape.shape[1])<br><br># Perform prediction on the test data<br># y_pred = model.predict(X_test_reshaped)<br>y_pred = model_transformer.predict(X_test)<br><br># The model outputs class probabilities, so convert them to class labels with argmax<br># (axis=2 because the transformer output has shape (samples, time_steps, classes))<br>y_pred_classes = np.argmax(y_pred, axis=2)<br><br># y_test already contains integer class labels (0, 1, 2), so no conversion is needed<br>y_test_classes = y_test<br><br># Compute confusion matrix for test data<br>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes)<br><br># Plot confusion matrix<br>plt.figure(figsize=(8, 6))<br>sns.heatmap(conf_matrix_test, annot=True, cmap=&#39;Blues&#39;, fmt=&#39;g&#39;, cbar=False)<br>plt.xlabel(&#39;Predicted labels&#39;)<br>plt.ylabel(&#39;True labels&#39;)<br>plt.title(&#39;Confusion Matrix - Test Data&#39;)<br>plt.show()<br><br>from sklearn.metrics import classification_report<br><br># Generate classification report for test data<br>class_report = classification_report(y_test, y_pred_classes)<br><br># Print classification report<br>print(&quot;Classification Report - Test Data:\n&quot;, class_report)<br></pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/476/1*HN0TzxL2qqxloCn2-cGCpQ.png" /><figcaption>output of the 
above code</figcaption></figure><p><strong>1. Imports:</strong></p><ul><li>confusion_matrix from sklearn.metrics for calculating the confusion matrix.</li><li>matplotlib.pyplot (plt) and seaborn (sns) for creating the confusion matrix visualization.</li><li>classification_report from sklearn.metrics for generating a classification report.</li></ul><p><strong>2. Reshaping Data (Commented Out):</strong></p><ul><li>The commented section addresses potential reshaping issues. It’s important to ensure your test data (X_test) has the correct shape expected by the model for prediction.</li></ul><p><strong>3. Prediction:</strong></p><ul><li>y_pred = model_transformer.predict(X_test) performs predictions on the test data using your trained model.</li></ul><p><strong>4. Post-processing Predictions:</strong></p><ul><li>y_pred_classes = np.argmax(y_pred, axis=2) assumes your model outputs probabilities for each class (neutral, long, short). This line converts the probabilities to class labels by using argmax (finding the index of the maximum value) along axis 2.</li></ul><p><strong>5. Converting True Labels:</strong></p><ul><li>y_test_classes = y_test assumes your y_test data already contains class labels (0, 1, 2) for the test set.</li></ul><p><strong>6. Confusion Matrix:</strong></p><ul><li>conf_matrix_test = confusion_matrix(y_test_classes, y_pred_classes) calculates the confusion matrix for the test data. It shows how many samples from each true class were predicted into each class by the model.</li></ul><p><strong>7. Visualization:</strong></p><ul><li>The code creates a heatmap visualization of the confusion matrix using seaborn. This allows you to visually inspect how well the model classified each class. Ideally, you want to see high values on the diagonal, indicating correct classifications.</li></ul><p><strong>8. Classification Report:</strong></p><ul><li>class_report = classification_report(y_test, y_pred_classes) generates a classification report for the test data. 
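As a tiny hand-worked illustration of what that report contains, here are precision and recall for one class computed in plain Python on toy labels (not the model's actual output):

```python
# Toy true and predicted labels for the three classes 0/1/2
y_true = [0, 0, 1, 1, 2, 2, 1, 0]
y_hat  = [0, 1, 1, 1, 2, 0, 1, 0]

def precision_recall(cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_hat))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_hat))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_hat))
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted cls, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true cls, how many were found
    return precision, recall

print(precision_recall(1))  # (0.75, 1.0): 3 of 4 predicted 1s correct, all true 1s found
```

classification_report computes exactly these quantities (plus F1 and support) for every class at once.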
This report provides metrics like precision, recall, F1-score, and support for each class, offering a more detailed breakdown of the model&#39;s performance.</li></ul><h4>Backtest with Test and Whole Data:</h4><pre>df_ens_test = df.copy()<br><br>df_ens = df_ens_test[len(X_train):]<br><br>df_ens[&#39;transformer_neural_scaled&#39;] =  np.argmax(model_transformer.predict(X_test), axis=2)<br><br>df_ens[&#39;trns&#39;] = df_ens[&#39;transformer_neural_scaled&#39;].shift(1).dropna().astype(int)<br><br>df_ens = df_ens.dropna()<br><br>df_ens[&#39;trns&#39;]<br><br># df_ens = df.copy()<br><br># # df_ens = df_ens_test[len(X_train):]<br><br># df_ens[&#39;transformer_neural_scaled&#39;] =  np.argmax(model_transformer.predict(X), axis=2)<br><br># df_ens[&#39;trns&#39;] = df_ens[&#39;transformer_neural_scaled&#39;].shift(-1).dropna().astype(int)<br><br># df_ens = df_ens.dropna()<br><br># df_ens[&#39;trns&#39;]<br><br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br><br>def SIGNAL_1(df_ens):<br>    return df_ens[&#39;trns&#39;]<br><br>class MyCandlesStrat_1(Strategy):<br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_1, self.data)<br><br>    def next(self):<br>        super().next()<br>        if self.signal1_1 == 1:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif self.signal1_1 == 2:<br>            sl_pct = 0.055  # 5.5% stop-loss<br>            tp_pct = 0.055  # 5.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br><br>
<br>bt_1 = Backtest(df_ens, MyCandlesStrat_1, cash=100000, commission=.001, exclusive_orders=False)<br>stat_1 = bt_1.run()<br>stat_1<br></pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/369/1*QqhGpm2bdbHE5gDdKsqxXw.png" /></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>The provided code implements a backtesting strategy using our Transformer model predictions (df_ens[&#39;transformer_neural_scaled&#39;]) to generate buy and sell signals for ETH prices in a Pandas DataFrame (df_ens). Here&#39;s a breakdown of each step:</p><p><strong>1. Data Preparation (Outside the Code Block):</strong></p><ul><li>df_ens_test = df.copy(): Creates a copy of the original DataFrame (df).</li><li>df_ens = df_ens_test[len(X_train):]: Selects the data from the test set (after the training data). This ensures the model predictions are used on unseen data for backtesting.</li></ul><p><strong>2. Transformer Predictions:</strong></p><ul><li>df_ens[&#39;transformer_neural_scaled&#39;] = np.argmax(model_transformer.predict(X_test), axis=2): Makes predictions on the test data using your model_transformer and converts the class probabilities to predicted labels (0: neutral, 1: long, 2: short).</li></ul><p><strong>3. Signal Generation:</strong></p><ul><li>df_ens[&#39;trns&#39;] = df_ens[&#39;transformer_neural_scaled&#39;].shift(1).dropna().astype(int): This line creates the signal column (&#39;trns&#39;). 
It:</li><li>Shifts the predicted labels (&#39;transformer_neural_scaled&#39;) by 1 period, so that the signal acted on at bar n is the prediction made at bar n-1; this keeps the backtest free of look-ahead bias.</li><li>Uses .dropna() to remove rows with missing values (likely the first row due to the shift).</li><li>Converts the shifted labels to integers (0, 1, 2) using .astype(int).</li><li>This &#39;trns&#39; column essentially represents the predicted direction for the next price movement based on your model&#39;s classifications.</li></ul><p><strong>4. Data Cleaning (Optional, Commented Out):</strong></p><ul><li>The commented-out section (# df_ens = df.copy()...) is an alternative approach. It predicts on the entire DataFrame (X) and shifts the labels by -1. This is problematic on two counts: the model is evaluated on data it was trained on (in-sample bias), and shifting by -1 pulls each prediction one bar backward in time, which introduces look-ahead bias.</li></ul><p><strong>5. DataFrame Setup:</strong></p><ul><li>df_ens = df_ens.reset_index(inplace=False): Resets the index of the DataFrame to a numerical sequence.</li><li>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;]): Converts the &#39;Date&#39; column to datetime format.</li><li>df_ens.set_index(&#39;Date&#39;, inplace=True): Sets the &#39;Date&#39; column as the index for the DataFrame.</li></ul><p><strong>6. Signal Function (Outside the Code Block):</strong></p><ul><li>def SIGNAL_1(df_ens):: Defines a function SIGNAL_1 that simply returns the &#39;trns&#39; column containing the predicted signals.</li></ul><p><strong>7. 
Backtesting Strategy Class (</strong><strong>MyCandlesStrat_1):</strong></p><ul><li>This class inherits from Strategy, the base class imported earlier from the backtesting library (backtesting.py).</li><li>def init(self):: In the initialization, it creates an indicator (self.signal1_1) that holds the SIGNAL_1 function applied to the data (self.data).</li><li>def next(self):: In the next function, which gets called for each bar in the backtesting process:</li><li>It checks the value of the signal1_1 indicator:</li></ul><p>If it’s 1 (predicted long position):</p><ul><li>It defines a stop-loss (SL) price 5.5% below the current closing price and a take-profit (TP) price 5.5% above the closing price.</li><li>It places a buy order with the defined SL and TP.</li></ul><p>If it’s 2 (predicted short position):</p><ul><li>It defines an SL price 5.5% above the current closing price and a TP price 5.5% below the closing price (reversed for short positions).</li><li>It places a sell order with the defined SL and TP.</li></ul><p><strong>8. 
Backtesting and Results:</strong></p><ul><li>bt_1 = Backtest(df_ens, MyCandlesStrat_1, cash=100000, commission=.001, exclusive_orders=False): Creates a backtest object (bt_1) using your DataFrame (df_ens), the strategy class (MyCandlesStrat_1), an initial cash amount (cash), a commission rate (commission), and sets exclusive_orders to False (so new orders do not automatically close existing trades, allowing overlapping positions).</li><li>stat_1 = bt_1.run(): Runs the backtest and stores the results in stat_1.</li><li>stat_1: Displaying this variable prints the backtesting statistics generated by the run (return, drawdown, win rate, Sharpe Ratio, and so on).</li></ul><pre>from keras.models import save_model<br><br># Define filename with specific details<br>filename = f&quot;./models/transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_{round(stat_1[&#39;Sharpe Ratio&#39;],2)}.keras&quot;<br><br># Save the model using the filename<br>save_model(model_transformer, filename)</pre><p><strong>Explanation:</strong></p><ol><li><strong>Import:</strong></li></ol><ul><li>save_model from keras.models is used to save the model.</li></ul><p><strong>2. Filename Definition:</strong></p><ul><li>The filename is constructed using an f-string (formatted string literal). It incorporates various details:</li><li>Path: ./models/: This specifies the directory where you want to save the model.</li><li>Model Name: transformer_model: Base name for the model.</li><li>Hyperparameters: _55sl_55tp: Indicates the 5.5% stop-loss (SL) and take-profit (TP) values used in the backtesting strategy.</li><li>Data Info: _eth_15m: Refers to the data being Ethereum (ETH) prices on a 15-minute time frame.</li><li>Date: _may_13th: The date the model was trained (May 13th).</li><li>Performance Metric: _ShRa_{round(stat_1[&#39;Sharpe Ratio&#39;],2)}: Appends the Sharpe Ratio from the backtesting results (stat_1), rounded to two decimal places.</li><li>File Extension: .keras: Standard extension for Keras models.</li></ul><p><strong>3. 
Saving the Model:</strong></p><ul><li>save_model(model_transformer, filename): This line saves your trained model_transformer to the specified file with the constructed filename.</li></ul><p><strong>Key Points:</strong></p><ul><li>This approach provides a clear and informative way to save our model, including details about its training parameters, data, and performance.</li><li>You can modify the filename structure to include additional information relevant to your needs.</li></ul><h4>Let’s backtest the entire dataset with the saved model:</h4><pre>from keras.models import load_model<br><br># Load the saved Keras model from disk<br>best_model = load_model(&#39;./models/transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;)</pre><p><strong>Intended Functionality:</strong></p><ol><li><strong>Import:</strong></li></ol><ul><li>load_model from keras.models is used to load a saved model.</li></ul><p><strong>2. Loading the Model:</strong></p><ul><li>best_model = load_model(&#39;./models/transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;): This line attempts to load a model saved with the filename transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras from the directory ./models/.</li></ul><pre>df_ens = df.copy() <br><br># df_ens = df_ens_test[:len(X)]<br><br>y_pred = best_model.predict(X)<br><br><br># Perform any necessary post-processing on y_pred if needed<br># For example, if your model outputs probabilities, you might convert them to class labels using argmax:<br><br># y_pred_classes = np.argmax(y_pred, axis=1)<br># y_pred = np.argmax(y_pred, axis=1) # for lstm, tcn, cnn models<br>y_pred = np.argmax(y_pred, axis=2) # for transformers model<br><br>df_ens[&#39;best_model&#39;] =  y_pred<br><br>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int)<br><br>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22)<br>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], 
timeperiod=55)<br>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108)<br><br>df_ens = df_ens.dropna()<br><br>df_ens[&#39;bm&#39;]<br><br>df_ens = df_ens.reset_index(inplace=False)<br>df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>df_ens.set_index(&#39;Date&#39;, inplace=True)<br><br>def SIGNAL_010(df_ens):<br>    return df_ens[&#39;bm&#39;]<br><br>def SIGNAL_0122(df_ens):<br>    return df_ens[&#39;ema_22&#39;]<br><br>def SIGNAL_0155(df_ens):<br>    return df_ens[&#39;ema_55&#39;]<br><br>def SIGNAL_01108(df_ens):<br>    return df_ens[&#39;ema_108&#39;]<br><br>class MyCandlesStrat_010(Strategy):  <br>    def init(self):<br>        super().init()<br>        self.signal1_1 = self.I(SIGNAL_010, self.data)<br>        self.ema_1_22 = self.I(SIGNAL_0122, self.data)<br>        self.ema_1_55 = self.I(SIGNAL_0155, self.data)<br>        self.ema_1_108 = self.I(SIGNAL_01108, self.data)<br>    <br>    def next(self):<br>        super().next() <br>        # if (self.signal1_1 == 1) and (self.data.Close &gt; self.ema_1_22) and (self.ema_1_22 &gt; self.ema_1_55) and (self.ema_1_55 &gt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 10% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 - sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 + tp_pct)<br>        #     self.buy(sl=sl_price, tp=tp_price)<br>        # elif (self.signal1_1 == 2)  and (self.data.Close &lt; self.ema_1_22) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>        #     sl_pct = 0.025  # 10% stop-loss<br>        #     tp_pct = 0.025  # 2.5% take-profit<br>        #     sl_price = self.data.Close[-1] * (1 + sl_pct)<br>        #     tp_price = self.data.Close[-1] * (1 - tp_pct)<br>        #     self.sell(sl=sl_price, tp=tp_price)<br>            <br>    # def next(self):<br>    #     super().next() <br>    #     if (self.signal1_1 == 1) and (self.ema_1_22 &gt; 
self.ema_1_55) and (self.ema_1_55 &gt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 - sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 + tp_pct)<br>    #         self.buy(sl=sl_price, tp=tp_price)<br>    #     elif (self.signal1_1 == 2) and (self.ema_1_22 &lt; self.ema_1_55) and (self.ema_1_55 &lt; self.ema_1_108):<br>    #         sl_pct = 0.025  # 2.5% stop-loss<br>    #         tp_pct = 0.025  # 2.5% take-profit<br>    #         sl_price = self.data.Close[-1] * (1 + sl_pct)<br>    #         tp_price = self.data.Close[-1] * (1 - tp_pct)<br>    #         self.sell(sl=sl_price, tp=tp_price)<br>            <br>        if (self.signal1_1 == 1):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 - sl_pct)<br>            tp_price = self.data.Close[-1] * (1 + tp_pct)<br>            self.buy(sl=sl_price, tp=tp_price)<br>        elif (self.signal1_1 == 2):<br>            sl_pct = 0.035  # 3.5% stop-loss<br>            tp_pct = 0.025  # 2.5% take-profit<br>            sl_price = self.data.Close[-1] * (1 + sl_pct)<br>            tp_price = self.data.Close[-1] * (1 - tp_pct)<br>            self.sell(sl=sl_price, tp=tp_price)<br><br>            <br>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001)<br>stat_010 = bt_010.run()<br>stat_010</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/369/1*Apujlc2269l4KIzxUWySdA.png" /><figcaption>33885%+ returns for ETH in 1022 days using Neural Networks Transformers Model with VishvaAlgo</figcaption></figure><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a 
href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>This code builds upon your previous strategy by incorporating a Transformer model prediction (&#39;best_model&#39;) along with Exponential Moving Averages (EMAs) to generate buy and sell signals for a backtesting strategy. Here&#39;s a breakdown:</p><p><strong>1. Data Preparation:</strong></p><ul><li>df_ens = df.copy(): Creates a copy of the original DataFrame (df).</li><li>y_pred = best_model.predict(X): Makes predictions on the entire DataFrame (X) using your loaded Transformer model (best_model).</li><li>df_ens[&#39;best_model&#39;] = y_pred: Adds a new column &#39;best_model&#39; to the DataFrame containing the model predictions.</li><li>df_ens[&#39;bm&#39;] = df_ens[&#39;best_model&#39;].shift(1).dropna().astype(int): Similar to before, this creates a shifted signal column &#39;bm&#39; based on the predicted labels, but here it might include predictions for the entire DataFrame.</li><li>df_ens[&#39;ema_22&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=22): Calculates the 22-period EMA for the &#39;Close&#39; price and adds it as a new column &#39;ema_22&#39;.</li><li>df_ens[&#39;ema_55&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=55): Similar to above, calculates the 55-period EMA and adds it as &#39;ema_55&#39;.</li><li>df_ens[&#39;ema_108&#39;] = ta.EMA(df_ens[&#39;Close&#39;], timeperiod=108): Calculates the 108-period EMA and adds it as &#39;ema_108&#39;.</li><li>df_ens = df_ens.dropna(): Removes rows with missing values (likely the first row due to shifting).</li></ul><p><strong>2. Signal Functions (Outside the Code Block):</strong></p><ul><li>These functions (SIGNAL_010, SIGNAL_0122, etc.) simply return the corresponding columns from the DataFrame (&#39;bm&#39;, &#39;ema_22&#39;, etc.) used for generating the signals.</li></ul><p><strong>3. 
Backtesting Strategy Class (</strong><strong>MyCandlesStrat_010):</strong></p><ul><li>Inherits from Strategy.</li><li>def init(self): Initializes indicators for the Transformer model predictions (self.signal1_1) and EMAs (self.ema_1_22, etc.).</li></ul><p><strong>4. Backtesting Logic (in </strong><strong>next function):</strong></p><ul><li>The commented-out section shows a more complex logic considering the relationship between the Transformer predictions and the EMAs for buy/sell decisions.</li><li>The current active section uses a simpler approach:</li><li>If self.signal1_1 (Transformer prediction) is 1 (long):</li><li>Buy with stop-loss (SL) at 3.5% below current close and take-profit (TP) at 2.5% above.</li><li>If self.signal1_1 is 2 (short):</li><li>Sell with SL at 3.5% above current close and TP at 2.5% below.</li></ul><p><strong>5. Backtesting and Results:</strong></p><ul><li>bt_010 = Backtest(df_ens, MyCandlesStrat_010, cash=100000, commission=.001): Creates a backtest object using the DataFrame, strategy class, and other parameters.</li><li>stat_010 = bt_010.run(): Runs the backtest and stores the results in stat_010.</li><li>stat_010: This variable likely contains the backtesting statistics you can analyze.</li></ul><p><strong>Key Points:</strong></p><ul><li>This strategy combines predictions from our Transformer model with technical indicators (EMAs) for generating signals.</li><li>You can experiment with different conditions in the next function to create more sophisticated trading strategies.</li><li>Remember that backtesting results may not guarantee future performance, and proper risk management is crucial for real-world trading</li></ul><h4><strong>Conclusion for Transformers Model:</strong></h4><p>Transformers offer a powerful approach for classifying time series data like ETH price movements. 
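The order logic described above reduces to a few lines of price arithmetic. As a minimal sketch (with `compute_sl_tp` as a hypothetical helper name, not part of the original notebook):

```python
# Minimal sketch of the SL/TP arithmetic used inside the strategy's next()
# method. compute_sl_tp is a hypothetical helper (not in the original code);
# it isolates the price math so the long/short asymmetry is easy to verify.

def compute_sl_tp(close, signal, sl_pct=0.035, tp_pct=0.025):
    """Return (stop_loss, take_profit) prices for signal 1 (long) or 2 (short)."""
    if signal == 1:    # long: SL below the close, TP above it
        return close * (1 - sl_pct), close * (1 + tp_pct)
    if signal == 2:    # short: SL above the close, TP below it
        return close * (1 + sl_pct), close * (1 - tp_pct)
    return None, None  # signal 0: stay flat

sl, tp = compute_sl_tp(2000.0, 1)  # roughly 1930.0 and 2050.0
```

Note that with a 3.5% stop and a 2.5% target each losing trade costs more than a winning trade earns, so the configuration implicitly relies on a win rate comfortably above 50%.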
Understanding the core principles of attention mechanisms and how they are used in the model can help you evaluate its predictions and make informed trading decisions. Remember that effective trading strategies require a combination of technical analysis, fundamental analysis, and risk management.</p><h3>Applying the Neural Network Transformers Model to Other Assets and Shortlisting the Best:</h3><p>From here on, we explain how to reuse the same trained model to shortlist the best assets: download data for every asset from TradingView, backtest each one with the model, and keep the top performers.</p><h4>Importing Necessary Packages and Setting Up the Model &amp; Exchange API with CCXT</h4><pre>import time<br>import logging<br>import io<br>import contextlib<br>import glob<br>import ccxt<br>from datetime import datetime, timedelta, timezone<br>import keras<br>from keras.models import save_model, load_model<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>import warnings<br>from threading import Thread, Event<br>import decimal<br>import joblib<br>from tcn import TCN<br><br># from pandas.core.computation import PerformanceWarning<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br><br># NOTE: Train your own model from the other notebook I have shared and use the most successful trained model here.<br><br># model_file_path = &#39;./model_lstm_1tp_1sl_2p5SlTp_April_5th_ShRa_1_49_15m.hdf5&#39;<br>model_file_path = &#39;./models/transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;<br>model_name = model_file_path.split(&#39;/&#39;)[-1]<br><br>##################################### TO Load A Model #######################################<br><br># NOTE: for LSTM based neural network model you can directly load_model with model_file_path as given below<br># Load your pre-trained model, keras trained model will only take load_model from keras.models and not from 
joblib<br><br>model = load_model(model_file_path)<br># # or<br># model = tf.keras.models.load_model(model_file_path)<br><br># NOTE: for TCN based neural network model, you need to add custom_objects while loading the model, as given below<br># # Define a dictionary to specify custom objects<br><br># custom_objects = {&#39;TCN&#39;: TCN}<br># model = load_model(model_file_path, custom_objects = custom_objects)<br><br><br>##########################################################################################<br><br>########################## Adding the exchange information ##############################<br><br>exchange = ccxt.binanceusdm(<br>    {<br>        &#39;enableRateLimit&#39;: True,  # required by the Manual<br>        # Add any other authentication parameters if needed<br>        &#39;rateLimit&#39;: 250, &#39;verbose&#39;: True<br>    }<br>    )<br><br># NOTE: I used https://testnet.binancefuture.com/en/futures/BTCUSDT for testnet API (this has very bad liquidity issues for various assets and many other issues but can be used for purely testing purposes)<br>#  kraken testnet creds pubkey - K9dS2SK8JURMl9F300lguUhOS/ao3HM+tfRMgJGed+JhDfpJhvsC/y           privatekey - /J/03PPyPwsrPsKZYtLqOQNPLKZJattT6i15Bpg14/6ALokHHY/MBb1p6tYKyFgkKXIJIOMbBsFRfL3aBZUvQ1<br><br># api_key = &#39;8f7080f8821b58a53f5c49f00cbff7fdcce1cca9c9154ea&#39;<br># secret_key = &#39;1e58391a46a7dbb098aa5121d3e69e3a6660ba8c38f&#39;<br><br><br># exchange.apiKey = api_key<br># exchange.secret = secret_key<br># exchange.set_sandbox_mode(True)<br><br><br># NOTE: if you want to go live, uncomment the 5 lines below and comment out the 5 lines above, then change to your own api_key and secret_key (the one below is a dummy; also make sure to give &quot;futures&quot; permission while creating your API key in the exchange)<br><br>api_key = &#39;CxUdC80c3Y5Nf1iRJMZJelOCfFJWISbQsasPraCb4Zdskx7MM8uCl&#39;<br>secret_key = &#39;p4XwsZwmmNswzDHzE5TSUOgXT5tASArfSO0pxfYrBMtezlCpDGtz&#39;<br><br>exchange.apiKey = 
api_key<br>exchange.secret = secret_key<br>exchange.set_sandbox_mode(False)<br>#######################################################################################<br><br>    # exchange.set_sandbox_mode(True)<br>exchange.has<br># exchange.fetchBalance()[&quot;info&quot;][&quot;assets&quot;]<br><br>exchange.options = {&#39;defaultType&#39;: &#39;future&#39;, # or &#39;margin&#39; or &#39;spot&#39;<br>                    &#39;timeDifference&#39;: 0,  # Set an appropriate initial value for time difference<br>                        &#39;adjustForTimeDifference&#39;: True,<br>                        &#39;newOrderRespType&#39;: &#39;FULL&#39;,<br>                        &#39;defaultTimeInForce&#39;: &#39;GTC&#39;}<br><br></pre><p>The provided code snippet demonstrates how to load our trained model and connect to a cryptocurrency exchange (Binance) for potential shortlisting of assets based on backtesting. Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Standard libraries for time, logging, data manipulation (pandas, numpy), machine learning (Keras, scikit-learn), technical indicators (talib), threading, and others.</li></ul><p><strong>Model Loading:</strong></p><ul><li>Comments explain the difference in loading a model based on its type:</li><li><strong>LSTM Model:</strong> Uses load_model from keras.models directly (as shown in your code).</li><li><strong>TCN Model:</strong> Requires specifying custom objects (custom_objects={&#39;TCN&#39;: TCN}) during loading.</li></ul><p><strong>Exchange Connection:</strong></p><ul><li>Creates a ccxt.binanceusdm object (exchange) to interact with the Binance exchange.</li><li>Sets API credentials and enables rate limiting for responsible API usage.</li><li>Comments mention testnet and live API usage options.</li></ul><p><strong>Important Notes:</strong></p><ul><li><strong>Replace API Keys:</strong> Replace the dummy api_key and secret_key with your actual Binance API credentials (if going live). 
Ensure your API has &quot;futures&quot; permission.</li><li><strong>Backtesting Not Shown:</strong> This code focuses on model loading and exchange connection. The actual backtesting loop and asset shortlisting logic are not included.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Loop:</strong> You’ll need to implement a loop to iterate through your desired assets:</li></ol><ul><li>Download historical data from the exchange (using exchange.fetch_ohlcv) for each asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model (model.predict).</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Store backtesting results for each asset.</li></ul><ol><li><strong>Shortlisting:</strong> Analyze the stored backtesting results and apply filters/sorting based on your chosen metrics to shortlist the best-performing assets.</li><li><strong>Risk Management:</strong> Remember, backtesting is for evaluation, not a guarantee of future success. 
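As a sketch of the download step in the loop above: ccxt's fetch_ohlcv returns rows of [timestamp_ms, open, high, low, close, volume], and a small helper can convert them into the Date-indexed OHLCV DataFrame the backtest expects (`ohlcv_to_df` is a hypothetical name, not part of the original code):

```python
# Convert raw ccxt OHLCV rows into the Date-indexed DataFrame used by the
# backtesting code. ohlcv_to_df is a hypothetical helper, not from the notebook.

import pandas as pd

def ohlcv_to_df(ohlcv_rows):
    df = pd.DataFrame(ohlcv_rows,
                      columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
    df['Date'] = pd.to_datetime(df['Date'], unit='ms')  # epoch-ms -> datetime
    return df.set_index('Date')

# Against a live exchange this would be fed by (not executed here):
# ohlcv = exchange.fetch_ohlcv('ETH/USDT', timeframe='15m', limit=1000)
# df = ohlcv_to_df(ohlcv)

sample = [[1715558400000, 2900.0, 2910.0, 2890.0, 2905.0, 12.5]]
df = ohlcv_to_df(sample)
```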
Implement proper risk management strategies before using these shortlisted assets in real trading.</li></ol><pre>from sklearn.preprocessing import MinMaxScaler<br>from backtesting import Strategy, Backtest<br>import os<br>import json<br>import pandas as pd<br>import talib as ta<br>import numpy as np<br>from concurrent.futures import ThreadPoolExecutor<br>import threading<br><br>import time<br>import ccxt<br>from keras.models import save_model, load_model<br>import warnings<br>import decimal<br>import joblib<br>import nest_asyncio<br># from pandas.core.computation import PerformanceWarning<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br><br># Load your pre-trained model<br># model = load_model(&#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39;)<br><br># Define the custom_assets dictionary outside the loop<br>custom_assets = {}<br><br># Function to load custom_assets from a text file<br>def load_custom_assets():<br>    if os.path.exists(&#39;custom_assets.txt&#39;):<br>        try:<br>            with open(&#39;custom_assets.txt&#39;, &#39;r&#39;) as txt_file:<br>                return json.loads(txt_file.read())<br>        except json.JSONDecodeError as e:<br>            print(f&quot;Error decoding JSON in custom_assets.txt: {e}&quot;)<br>            return {}<br>    else:<br>        print(&quot;custom_assets.txt file not found. Initializing an empty dictionary.&quot;)<br>        custom_assets = {}<br>        save_custom_assets(custom_assets)<br>        return custom_assets<br><br># Define a threading lock<br>file_lock = threading.Lock()<br><br># Function to save custom_assets to a text file<br>def save_custom_assets(custom_assets):<br>    with file_lock:<br>        with open(&#39;custom_assets.txt&#39;, &#39;w&#39;) as txt_file:<br>            json.dump(custom_assets, txt_file, indent=4)</pre><p>The provided code focuses on managing custom assets and preparing for multi-threaded backtesting. 
Here’s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for data manipulation (pandas, numpy), technical indicators (talib), backtesting framework (backtesting), threading, and others.</li></ul><p><strong>Custom Assets Management:</strong></p><p>custom_assets dictionary:</p><ul><li>Stores custom assets for backtesting (likely symbols or names).</li></ul><p>load_custom_assets function:</p><ul><li>Checks for a file named custom_assets.txt.</li><li>If the file exists, attempts to load the dictionary from the JSON content. Handles potential JSON decoding errors.</li><li>If the file doesn’t exist, initializes an empty dictionary, saves it, and returns it.</li></ul><p>save_custom_assets function:</p><ul><li>Uses a threading lock (file_lock) to ensure safe access to the file during potential concurrent writes.</li><li>Saves the custom_assets dictionary as JSON to the custom_assets.txt file.</li></ul><p><strong>Next Steps:</strong></p><ol><li><strong>Backtesting Function:</strong> You’ll likely define a function for the backtesting logic. This function would:</li></ol><ul><li>Take an asset symbol as input.</li><li>Download historical data for the asset.</li><li>Preprocess the data (scaling, feature engineering).</li><li>Make predictions using your loaded model.</li><li>Apply your backtesting strategy (similar to previous examples) incorporating predictions and potentially technical indicators.</li><li>Calculate and store backtesting results (Sharpe Ratio, drawdown, etc.) for the asset.</li></ul><p><strong>2. Multithreaded Backtesting:</strong></p><ul><li>You can utilize the ThreadPoolExecutor and threading capabilities to download and backtest multiple assets simultaneously. 
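The fan-out over assets can be sketched with ThreadPoolExecutor; `backtest_asset` below is a hypothetical placeholder for the real download-preprocess-predict-backtest routine, returning a dummy metric so the threading pattern itself is runnable:

```python
# Multithreaded fan-out over asset symbols. backtest_asset is a hypothetical
# stand-in for the real per-asset routine (download OHLCV, preprocess,
# model.predict, run Backtest, return stats such as the Sharpe Ratio).

from concurrent.futures import ThreadPoolExecutor, as_completed

def backtest_asset(symbol):
    # Real version would return the backtest statistics for this symbol.
    return {'symbol': symbol, 'sharpe': 0.0}

symbols = ['BTCUSDT', 'ETHUSDT', 'SOLUSDT']
results = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(backtest_asset, s): s for s in symbols}
    for fut in as_completed(futures):
        stats = fut.result()  # re-raises any exception from the worker thread
        results[stats['symbol']] = stats

# results now holds one stats dict per symbol, ready for shortlisting.
```

Because each worker's exception is re-raised by `fut.result()`, a try/except around that call is the natural place to log and skip assets whose download or backtest fails.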
This can significantly improve efficiency compared to a sequential approach.</li><li>The custom_assets dictionary and its management functions will be crucial for providing asset symbols to the backtesting function within the thread pool.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>Remember to replace &#39;best_model_tcn_1sl_1tp_2p5SlTp_success.pkl&#39; with the actual path to your trained model file.</li><li>Consider error handling and logging mechanisms for potential issues during data download, backtesting calculations, or thread management.</li></ul><pre># NOTE: Fetching Binance Futures perpetual USDT assets. If a 4xx error occurs, access to Binance is restricted in your region (or your VPN exit is in a restricted area). You can use the asset collection given in the next cell instead<br><br>import requests<br><br>def get_binance_futures_assets():<br>    url = &quot;https://fapi.binance.com/fapi/v1/exchangeInfo&quot;<br>    try:<br>        response = requests.get(url)<br>        response.raise_for_status()  # Raise an exception for 4xx and 5xx status codes<br>        data = response.json()<br>        assets = [asset[&#39;symbol&#39;] for asset in data[&#39;symbols&#39;] if asset[&#39;contractType&#39;] == &#39;PERPETUAL&#39; and asset[&#39;quoteAsset&#39;] == &#39;USDT&#39;]<br>        return assets<br>    except requests.exceptions.RequestException as e:<br>        print(&quot;Failed to fetch Binance futures assets:&quot;, e)<br>        return []<br><br># Get all Binance futures USDT perpetual assets<br>futures_assets = get_binance_futures_assets()<br>print(&quot;Binance Futures USDT Perpetual Assets:&quot;)<br>print(futures_assets, len(futures_assets))</pre><pre>output:<br>&#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, 
&#39;ADAUSDT.P&#39;, &#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, &#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, &#39;ZENUSDT.P&#39;, &#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, &#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        
&#39;DENTUSDT.P&#39;, &#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, &#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>        &#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, &#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, 
&#39;XVSUSDT.P&#39;, &#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, &#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, &#39;JTOUSDT.P&#39;, &#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, &#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, 
&#39;MAVIAUSDT.P&#39;, &#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;</pre><p>This code snippet retrieves a list of perpetual USDT contracts available on Binance Futures using the official Binance API. Here’s a breakdown:</p><p><strong>Function:</strong></p><p>get_binance_futures_assets function:</p><ul><li>Defines the API endpoint URL for retrieving exchange information.</li><li>Uses a try-except block to handle potential errors during the request.</li></ul><p>Within the try block:</p><ul><li>Makes a GET request to the Binance API endpoint.</li><li>Raises an exception for status codes in the 4xx (client errors) or 5xx (server errors) range to indicate failures.</li><li>Parses the JSON response from the successful request.</li></ul><p>Extracts symbols from the response data:</p><ul><li>Iterates through the &#39;symbols&#39; list in the JSON data.</li></ul><p>Filters for assets with these criteria:</p><ul><li>&#39;contractType&#39; is &#39;PERPETUAL&#39; (indicates perpetual contracts).</li><li>&#39;quoteAsset&#39; is &#39;USDT&#39; (indicates USDT-quoted contracts).</li><li>Creates a list of asset symbols meeting the criteria and returns it.</li><li>The except block catches potential request exceptions and prints an error message. 
It also returns an empty list in case of failures.</li></ul><p><strong>Printing Results:</strong></p><ul><li>Calls the get_binance_futures_assets function to retrieve the asset list.</li><li>Prints a message indicating the retrieved assets and their count.</li></ul><p><strong>Additional Notes:</strong></p><ul><li>This approach leverages the official Binance API, which might be subject to rate limits or changes in the future. Consider implementing appropriate error handling and retry mechanisms.</li><li>The code assumes a successful API call. You might want to add checks for specific error codes (e.g., 429 for “Too Many Requests”) and handle them gracefully (e.g., retrying after a delay).</li></ul><pre># !pip install --upgrade --no-cache-dir git+https://github.com/rongardF/tvdatafeed.git<br><br><br>import os<br>import json<br>import asyncio<br>from datetime import datetime, timedelta<br>import pandas as pd<br>from tvDatafeed import TvDatafeed, Interval<br><br># Initialize TvDatafeed object<br># username = &#39;YourTradingViewUsername&#39;<br># password = &#39;YourTradingViewPassword&#39;<br><br># tv = TvDatafeed(username, password)<br>tv = TvDatafeed()<br><br>timeframe = &#39;15m&#39;<br>interval = None<br><br>if timeframe == &#39;1m&#39;:<br>    interval = Interval.in_1_minute<br>elif timeframe == &#39;3m&#39;:<br>    interval = Interval.in_3_minute<br>elif timeframe == &#39;5m&#39;:<br>    interval = Interval.in_5_minute<br>elif timeframe == &#39;15m&#39;:<br>    interval = Interval.in_15_minute<br>elif timeframe == &#39;30m&#39;:<br>    interval = Interval.in_30_minute<br>elif timeframe == &#39;45m&#39;:<br>    interval = Interval.in_45_minute<br>elif timeframe == &#39;1h&#39;:<br>    interval = Interval.in_1_hour<br>elif timeframe == &#39;2h&#39;:<br>    interval = Interval.in_2_hour<br>elif timeframe == &#39;4h&#39;:<br>    interval = Interval.in_4_hour<br>elif timeframe == &#39;1d&#39;:<br>    interval = Interval.in_daily<br>elif timeframe == &#39;1w&#39;:<br> 
   interval = Interval.in_weekly<br>elif timeframe == &#39;1M&#39;:<br>    interval = Interval.in_monthly<br><br># NOTE: List of symbols around 126 mentioned here. You can change to your own set of lists if you know the tradingview code for the symbol you want to download.<br>data = [<br>    &#39;BTCUSDT.P&#39;, &#39;ETHUSDT.P&#39;, &#39;BCHUSDT.P&#39;, &#39;XRPUSDT.P&#39;, &#39;EOSUSDT.P&#39;, &#39;LTCUSDT.P&#39;, &#39;TRXUSDT.P&#39;, &#39;ETCUSDT.P&#39;, <br>        &#39;LINKUSDT.P&#39;, &#39;XLMUSDT.P&#39;, &#39;ADAUSDT.P&#39;, &#39;XMRUSDT.P&#39;, &#39;DASHUSDT.P&#39;, &#39;ZECUSDT.P&#39;, &#39;XTZUSDT.P&#39;, &#39;BNBUSDT.P&#39;, <br>        &#39;ATOMUSDT.P&#39;, &#39;ONTUSDT.P&#39;, &#39;IOTAUSDT.P&#39;, &#39;BATUSDT.P&#39;, &#39;VETUSDT.P&#39;, &#39;NEOUSDT.P&#39;, &#39;QTUMUSDT.P&#39;, &#39;IOSTUSDT.P&#39;, <br>        &#39;THETAUSDT.P&#39;, &#39;ALGOUSDT.P&#39;, &#39;ZILUSDT.P&#39;, &#39;KNCUSDT.P&#39;, &#39;ZRXUSDT.P&#39;, &#39;COMPUSDT.P&#39;, &#39;OMGUSDT.P&#39;, &#39;DOGEUSDT.P&#39;, <br>        &#39;SXPUSDT.P&#39;, &#39;KAVAUSDT.P&#39;, &#39;BANDUSDT.P&#39;, &#39;RLCUSDT.P&#39;, &#39;WAVESUSDT.P&#39;, &#39;MKRUSDT.P&#39;, &#39;SNXUSDT.P&#39;, &#39;DOTUSDT.P&#39;, <br>        &#39;DEFIUSDT.P&#39;, &#39;YFIUSDT.P&#39;, &#39;BALUSDT.P&#39;, &#39;CRVUSDT.P&#39;, &#39;TRBUSDT.P&#39;, &#39;RUNEUSDT.P&#39;, &#39;SUSHIUSDT.P&#39;, &#39;SRMUSDT.P&#39;, <br>        &#39;EGLDUSDT.P&#39;, &#39;SOLUSDT.P&#39;, &#39;ICXUSDT.P&#39;, &#39;STORJUSDT.P&#39;, &#39;BLZUSDT.P&#39;, &#39;UNIUSDT.P&#39;, &#39;AVAXUSDT.P&#39;, &#39;FTMUSDT.P&#39;, <br>        &#39;HNTUSDT.P&#39;, &#39;ENJUSDT.P&#39;, &#39;FLMUSDT.P&#39;, &#39;TOMOUSDT.P&#39;, &#39;RENUSDT.P&#39;, &#39;KSMUSDT.P&#39;, &#39;NEARUSDT.P&#39;, &#39;AAVEUSDT.P&#39;, <br>        &#39;FILUSDT.P&#39;, &#39;RSRUSDT.P&#39;, &#39;LRCUSDT.P&#39;, &#39;MATICUSDT.P&#39;, &#39;OCEANUSDT.P&#39;, &#39;CVCUSDT.P&#39;, &#39;BELUSDT.P&#39;, &#39;CTKUSDT.P&#39;, <br>        &#39;AXSUSDT.P&#39;, &#39;ALPHAUSDT.P&#39;, 
&#39;ZENUSDT.P&#39;, &#39;SKLUSDT.P&#39;, &#39;GRTUSDT.P&#39;, &#39;1INCHUSDT.P&#39;, &#39;CHZUSDT.P&#39;, &#39;SANDUSDT.P&#39;, <br>        &#39;ANKRUSDT.P&#39;, &#39;BTSUSDT.P&#39;, &#39;LITUSDT.P&#39;, &#39;UNFIUSDT.P&#39;, &#39;REEFUSDT.P&#39;, &#39;RVNUSDT.P&#39;, &#39;SFPUSDT.P&#39;, &#39;XEMUSDT.P&#39;, <br>        &#39;COTIUSDT.P&#39;, &#39;CHRUSDT.P&#39;, &#39;MANAUSDT.P&#39;, &#39;ALICEUSDT.P&#39;, &#39;HBARUSDT.P&#39;, &#39;ONEUSDT.P&#39;, &#39;LINAUSDT.P&#39;, &#39;STMXUSDT.P&#39;, <br>        &#39;DENTUSDT.P&#39;, &#39;CELRUSDT.P&#39;, &#39;HOTUSDT.P&#39;, &#39;MTLUSDT.P&#39;, &#39;OGNUSDT.P&#39;, &#39;NKNUSDT.P&#39;, &#39;SCUSDT.P&#39;, &#39;DGBUSDT.P&#39;, <br>        &#39;1000SHIBUSDT.P&#39;, &#39;BAKEUSDT.P&#39;, &#39;GTCUSDT.P&#39;, &#39;BTCDOMUSDT.P&#39;, &#39;IOTXUSDT.P&#39;, &#39;AUDIOUSDT.P&#39;, &#39;RAYUSDT.P&#39;, &#39;C98USDT.P&#39;, <br>        &#39;MASKUSDT.P&#39;, &#39;ATAUSDT.P&#39;, &#39;DYDXUSDT.P&#39;, &#39;1000XECUSDT.P&#39;, &#39;GALAUSDT.P&#39;, &#39;CELOUSDT.P&#39;, &#39;ARUSDT.P&#39;, &#39;KLAYUSDT.P&#39;, <br>        &#39;ARPAUSDT.P&#39;, &#39;CTSIUSDT.P&#39;, &#39;LPTUSDT.P&#39;, &#39;ENSUSDT.P&#39;, &#39;PEOPLEUSDT.P&#39;, &#39;ANTUSDT.P&#39;, &#39;ROSEUSDT.P&#39;, &#39;DUSKUSDT.P&#39;, <br>        &#39;FLOWUSDT.P&#39;, &#39;IMXUSDT.P&#39;, &#39;API3USDT.P&#39;, &#39;GMTUSDT.P&#39;, &#39;APEUSDT.P&#39;, &#39;WOOUSDT.P&#39;, &#39;FTTUSDT.P&#39;, &#39;JASMYUSDT.P&#39;, &#39;DARUSDT.P&#39;, <br>        &#39;GALUSDT.P&#39;, &#39;OPUSDT.P&#39;, &#39;INJUSDT.P&#39;, &#39;STGUSDT.P&#39;, &#39;FOOTBALLUSDT.P&#39;, &#39;SPELLUSDT.P&#39;, &#39;1000LUNCUSDT.P&#39;, <br>        &#39;LUNA2USDT.P&#39;, &#39;LDOUSDT.P&#39;, &#39;CVXUSDT.P&#39;, &#39;ICPUSDT.P&#39;, &#39;APTUSDT.P&#39;, &#39;QNTUSDT.P&#39;, &#39;BLUEBIRDUSDT.P&#39;, &#39;FETUSDT.P&#39;, <br>        &#39;FXSUSDT.P&#39;, &#39;HOOKUSDT.P&#39;, &#39;MAGICUSDT.P&#39;, &#39;TUSDT.P&#39;, &#39;RNDRUSDT.P&#39;, &#39;HIGHUSDT.P&#39;, &#39;MINAUSDT.P&#39;, &#39;ASTRUSDT.P&#39;, <br>  
      &#39;AGIXUSDT.P&#39;, &#39;PHBUSDT.P&#39;, &#39;GMXUSDT.P&#39;, &#39;CFXUSDT.P&#39;, &#39;STXUSDT.P&#39;, &#39;COCOSUSDT.P&#39;, &#39;BNXUSDT.P&#39;, &#39;ACHUSDT.P&#39;, <br>        &#39;SSVUSDT.P&#39;, &#39;CKBUSDT.P&#39;, &#39;PERPUSDT.P&#39;, &#39;TRUUSDT.P&#39;, &#39;LQTYUSDT.P&#39;, &#39;USDCUSDT.P&#39;, &#39;IDUSDT.P&#39;, &#39;ARBUSDT.P&#39;, <br>        &#39;JOEUSDT.P&#39;, &#39;TLMUSDT.P&#39;, &#39;AMBUSDT.P&#39;, &#39;LEVERUSDT.P&#39;, &#39;RDNTUSDT.P&#39;, &#39;HFTUSDT.P&#39;, &#39;XVSUSDT.P&#39;, &#39;BLURUSDT.P&#39;, <br>        &#39;EDUUSDT.P&#39;, &#39;IDEXUSDT.P&#39;, &#39;SUIUSDT.P&#39;, &#39;1000PEPEUSDT.P&#39;, &#39;1000FLOKIUSDT.P&#39;, &#39;UMAUSDT.P&#39;, &#39;RADUSDT.P&#39;, <br>        &#39;KEYUSDT.P&#39;, &#39;COMBOUSDT.P&#39;, &#39;NMRUSDT.P&#39;, &#39;MAVUSDT.P&#39;, &#39;MDTUSDT.P&#39;, &#39;XVGUSDT.P&#39;, &#39;WLDUSDT.P&#39;, &#39;PENDLEUSDT.P&#39;, <br>        &#39;ARKMUSDT.P&#39;, &#39;AGLDUSDT.P&#39;, &#39;YGGUSDT.P&#39;, &#39;DODOXUSDT.P&#39;, &#39;BNTUSDT.P&#39;, &#39;OXTUSDT.P&#39;, &#39;SEIUSDT.P&#39;, &#39;CYBERUSDT.P&#39;, <br>        &#39;HIFIUSDT.P&#39;, &#39;ARKUSDT.P&#39;, &#39;FRONTUSDT.P&#39;, &#39;GLMRUSDT.P&#39;, &#39;BICOUSDT.P&#39;, &#39;STRAXUSDT.P&#39;, &#39;LOOMUSDT.P&#39;, &#39;BIGTIMEUSDT.P&#39;, <br>        &#39;BONDUSDT.P&#39;, &#39;ORBSUSDT.P&#39;, &#39;STPTUSDT.P&#39;, &#39;WAXPUSDT.P&#39;, &#39;BSVUSDT.P&#39;, &#39;RIFUSDT.P&#39;, &#39;POLYXUSDT.P&#39;, &#39;GASUSDT.P&#39;, <br>        &#39;POWRUSDT.P&#39;, &#39;SLPUSDT.P&#39;, &#39;TIAUSDT.P&#39;, &#39;SNTUSDT.P&#39;, &#39;CAKEUSDT.P&#39;, &#39;MEMEUSDT.P&#39;, &#39;TWTUSDT.P&#39;, &#39;TOKENUSDT.P&#39;, <br>        &#39;ORDIUSDT.P&#39;, &#39;STEEMUSDT.P&#39;, &#39;BADGERUSDT.P&#39;, &#39;ILVUSDT.P&#39;, &#39;NTRNUSDT.P&#39;, &#39;MBLUSDT.P&#39;, &#39;KASUSDT.P&#39;, &#39;BEAMXUSDT.P&#39;, <br>        &#39;1000BONKUSDT.P&#39;, &#39;PYTHUSDT.P&#39;, &#39;SUPERUSDT.P&#39;, &#39;USTCUSDT.P&#39;, &#39;ONGUSDT.P&#39;, &#39;ETHWUSDT.P&#39;, 
&#39;JTOUSDT.P&#39;, &#39;1000SATSUSDT.P&#39;, <br>        &#39;AUCTIONUSDT.P&#39;, &#39;1000RATSUSDT.P&#39;, &#39;ACEUSDT.P&#39;, &#39;MOVRUSDT.P&#39;, &#39;NFPUSDT.P&#39;, &#39;AIUSDT.P&#39;, &#39;XAIUSDT.P&#39;, <br>        &#39;WIFUSDT.P&#39;, &#39;MANTAUSDT.P&#39;, &#39;ONDOUSDT.P&#39;, &#39;LSKUSDT.P&#39;, &#39;ALTUSDT.P&#39;, &#39;JUPUSDT.P&#39;, &#39;ZETAUSDT.P&#39;, &#39;RONINUSDT.P&#39;, <br>        &#39;DYMUSDT.P&#39;, &#39;OMUSDT.P&#39;, &#39;PIXELUSDT.P&#39;, &#39;STRKUSDT.P&#39;, &#39;MAVIAUSDT.P&#39;, &#39;GLMUSDT.P&#39;, &#39;PORTALUSDT.P&#39;, &#39;TONUSDT.P&#39;, <br>        &#39;AXLUSDT.P&#39;, &#39;MYROUSDT.P&#39;, &#39;METISUSDT.P&#39;, &#39;AEVOUSDT.P&#39;, &#39;VANRYUSDT.P&#39;, &#39;BOMEUSDT.P&#39;, &#39;ETHFIUSDT.P&#39;, &#39;ENAUSDT.P&#39;, <br>        &#39;WUSDT.P&#39;, &#39;TNSRUSDT.P&#39;, &#39;SAGAUSDT.P&#39;, &#39;TAOUSDT.P&#39;, &#39;OMNIUSDT.P&#39;, &#39;REZUSDT.P&#39;<br>]<br><br>nest_asyncio.apply()<br><br># Define data download function<br>async def download_data(symbol):<br>    try:<br>        data = tv.get_hist(symbol=symbol, exchange=&#39;BINANCE&#39;, interval=interval, n_bars=20000, extended_session=True)<br>        if not data.empty:<br>            # Convert Date objects to strings<br>            # data[&#39;Date&#39;] = data.index.date.astype(str)<br>            # data[&#39;Time&#39;] = data.index.time.astype(str)<br>            data[&#39;date&#39;] = data.index.astype(str)  # Add a new column for timestamps<br>            folder_name = f&quot;tradingview_crypto_assets_{timeframe}&quot;<br>            os.makedirs(folder_name, exist_ok=True)<br>            # Replace &quot;USDT.P&quot; with &quot;/USDT:USDT&quot; in the file name<br>            symbol_file_name = symbol.replace(&quot;USDT.P&quot;, &quot;&quot;) + &quot;.json&quot;<br>            file_name = os.path.join(folder_name, symbol_file_name)<br>            # Convert DataFrame to dictionary<br>            data_dict = data.to_dict(orient=&#39;records&#39;)<br>         
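One caveat about the async design used here: tvDatafeed's get_hist is a blocking call, so awaiting many coroutines that call it directly still downloads one symbol at a time. A minimal sketch of genuine overlap using the default thread pool, with a dummy fetch_one standing in for tv.get_hist:

```python
import asyncio

def fetch_one(symbol):
    # Stand-in for the blocking tv.get_hist call
    return symbol + ":data"

async def fetch_all(symbols):
    # run_in_executor moves each blocking call onto a worker thread,
    # so the downloads genuinely overlap instead of running serially
    loop = asyncio.get_running_loop()
    futures = [loop.run_in_executor(None, fetch_one, s) for s in symbols]
    return await asyncio.gather(*futures)

results = asyncio.run(fetch_all(["BTCUSDT.P", "ETHUSDT.P"]))
```

asyncio.gather preserves input order, so results line up with the symbol list.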
   with open(file_name, &quot;w&quot;) as file:<br>                # Serialize dictionary to JSON<br>                json.dump(data_dict, file)<br>            print(f&quot;Data for {symbol} downloaded and saved successfully.&quot;)<br>        else:<br>            print(f&quot;No data available for {symbol}.&quot;)<br>    except Exception as e:<br>        print(f&quot;Error occurred while downloading data for {symbol}: {e}&quot;)<br><br># Define main function to run async download tasks<br>async def main():<br>    tasks = [download_data(symbol) for symbol in data]<br>    await asyncio.gather(*tasks)<br><br># Run the main function<br>asyncio.run(main())<br><br></pre><p>This code snippet demonstrates how to download historical cryptocurrency data from TradingView for multiple assets using the tvDatafeed library. Here&#39;s a breakdown:</p><p><strong>Imports:</strong></p><ul><li>Includes libraries for asynchronous programming (asyncio), working with dates (datetime), data manipulation (pandas), and file handling (os, json).</li><li>Imports the TvDatafeed class from tvDatafeed for interacting with TradingView.</li></ul><p><strong>TvDatafeed Object:</strong></p><ul><li>Initializes a TvDatafeed object (tv) without username and password (assuming a free account). 
Paid accounts might require credentials.</li></ul><p><strong>Timeframe and Interval:</strong></p><ul><li>Sets the desired timeframe (timeframe) for data download (e.g., &quot;15m&quot; for 15-minute intervals).</li><li>Maps the timeframe to the corresponding Interval enumeration value via an if/elif chain.</li></ul><p><strong>Symbols List:</strong></p><ul><li>Defines a long list of symbols (data) representing cryptocurrencies on Binance Futures with perpetual USDT contracts (identified by the &quot;.P&quot; suffix).</li></ul><p><strong>Asynchronous Programming Setup:</strong></p><ul><li>Calls nest_asyncio.apply() so the asyncio event loop can be re-entered, which is necessary in environments such as Jupyter that already run a loop (nest_asyncio must be imported for this call to work).</li></ul><p><strong>Download Function:</strong></p><ul><li>Defines an asynchronous function download_data(symbol) that takes a symbol as input.</li></ul><p>Attempts to download historical data for the symbol using tv.get_hist:</p><ul><li>Specifies the symbol, exchange (“BINANCE”), interval, number of bars (20000), and extended session (to potentially capture pre-market/after-market data).</li><li>Checks that the downloaded DataFrame (data) is not empty.</li></ul><p>If data is available:</p><ul><li>Converts the index (timestamps) to strings in a new column named “date”.</li><li>Creates a folder named tradingview_crypto_assets_{timeframe} to store the downloaded data (creates it if it doesn&#39;t exist).</li><li>Constructs the filename by stripping the “USDT.P” suffix from the symbol and appending “.json” (despite the in-code comment, nothing is replaced with “/USDT:USDT” at this step).</li><li>Converts the DataFrame to a list of records using to_dict(orient=&#39;records&#39;).</li><li>Saves the records as JSON to the constructed filename.</li><li>Prints a success message.</li></ul><p>If no data is available:</p><ul><li>Prints a message indicating no data for the symbol.</li></ul><p>Error handling:</p><ul><li>Catches any exception (Exception) raised during the download and prints an error message with the exception details.</li></ul><p><strong>Main Function:</strong></p><ul><li>Defines an 
asynchronous function main that:</li><li>Creates a list of asynchronous tasks (tasks) using a list comprehension. Each task calls download_data for a symbol from the data list.</li><li>Uses asyncio.gather(*tasks) to await all download tasks. Note that tv.get_hist is a blocking (synchronous) call, so the coroutines still execute one after another; true parallelism would require dispatching the blocking calls to a thread pool (e.g. via loop.run_in_executor).</li></ul><p><strong>Running the Download:</strong></p><ul><li>Uses asyncio.run(main()) to execute the asynchronous tasks within the main function.</li></ul><p><strong>Important Notes:</strong></p><ul><li>This code retrieves data for a large number of symbols. Downloading a significant amount of data might exceed free account limitations or take a long time. Consider rate limits and adjust accordingly.</li><li>The code assumes a specific symbol format with the “.P” suffix. You might need to modify it for different symbol formats.</li><li>Error handling can be improved by implementing specific checks for different exception types (e.g., network errors, API errors).</li></ul><h4>Hyperoptimization of Multiple Assets for Specific ML/DL Model:</h4><pre>from pandas import Timestamp<br>import json<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>from backtesting import Backtest, Strategy<br><br># NOTE: `model`, `model_name`, `load_custom_assets` and `save_custom_assets`<br># are assumed to be defined earlier in the notebook.<br><br># Define a function to process each JSON file<br>def process_json(file_path):<br>    with open(file_path, &quot;r&quot;) as f:<br>        data = json.load(f)<br><br>    df = pd.DataFrame(data)<br><br>    df.rename(columns={&#39;date&#39;: &quot;Date&quot;, &#39;open&#39;: &quot;Open&quot;, &#39;high&#39;: &quot;High&quot;, &#39;low&#39;: &quot;Low&quot;, &#39;close&#39;: &quot;Adj Close&quot;, &#39;volume&#39;: &quot;Volume&quot;}, inplace=True)<br><br>    df[&quot;Date&quot;] = pd.to_datetime(df[&#39;Date&#39;])<br><br>    df.set_index(&quot;Date&quot;, inplace=True)<br><br>    df[&#39;Close&#39;] = df[&#39;Adj Close&#39;]<br><br>    symbol_name = df[&#39;symbol&#39;].iloc[0]  # Assuming all rows have the same symbol<br>    symbol_name = symbol_name.replace(&quot;BINANCE:&quot;, &quot;&quot;)<br>    symbol_name = symbol_name.replace(&quot;USDT.P&quot;, &quot;/USDT:USDT&quot;)<br>    df.drop(columns=[&#39;symbol&#39;], 
inplace=True)<br><br>    target_prediction_number = 2<br>    time_periods = [6, 8, 10, 12, 14, 16, 18, 22, 26, 33, 44, 55]<br><br>    # Compute each indicator once per period; a second nested loop over the same<br>    # list would only recompute identical columns.<br>    for period in time_periods:<br>        df[f&#39;ATR_{period}&#39;] = ta.ATR(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;EMA_{period}&#39;] = ta.EMA(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;RSI_{period}&#39;] = ta.RSI(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;VWAP_{period}&#39;] = ta.SUM(df[&#39;Volume&#39;] * (df[&#39;High&#39;] + df[&#39;Low&#39;] + df[&#39;Close&#39;]) / 3, timeperiod=period) / ta.SUM(df[&#39;Volume&#39;], timeperiod=period)<br>        df[f&#39;ROC_{period}&#39;] = ta.ROC(df[&#39;Close&#39;], timeperiod=period)<br>        df[f&#39;KC_upper_{period}&#39;] = ta.EMA(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;KC_middle_{period}&#39;] = ta.EMA(df[&#39;Low&#39;], timeperiod=period)<br>        df[f&#39;Donchian_upper_{period}&#39;] = ta.MAX(df[&#39;High&#39;], timeperiod=period)<br>        df[f&#39;Donchian_lower_{period}&#39;] = ta.MIN(df[&#39;Low&#39;], timeperiod=period)<br>        macd, macd_signal, _ = ta.MACD(df[&#39;Close&#39;], fastperiod=(period + 12), slowperiod=(period + 26), signalperiod=(period + 9))<br>        df[f&#39;MACD_{period}&#39;] = macd<br>        df[f&#39;MACD_signal_{period}&#39;] = macd_signal<br>        bb_upper, bb_middle, bb_lower = ta.BBANDS(df[&#39;Close&#39;], timeperiod=period, nbdevup=2, nbdevdn=2)<br>        df[f&#39;BB_upper_{period}&#39;] = bb_upper<br>        df[f&#39;BB_middle_{period}&#39;] = bb_middle<br>        df[f&#39;BB_lower_{period}&#39;] = bb_lower<br>        df[f&#39;EWO_{period}&#39;] = ta.SMA(df[&#39;Close&#39;], timeperiod=(period+5)) - ta.SMA(df[&#39;Close&#39;], 
timeperiod=(period+35))<br><br>    df[&quot;Returns&quot;] = (df[&quot;Adj Close&quot;] / df[&quot;Adj Close&quot;].shift(target_prediction_number)) - 1<br>    df[&quot;Range&quot;] = (df[&quot;High&quot;] / df[&quot;Low&quot;]) - 1<br>    df[&quot;Volatility&quot;] = df[&#39;Returns&#39;].rolling(window=target_prediction_number).std()<br><br>    # Volume-Based Indicators<br>    df[&#39;OBV&#39;] = ta.OBV(df[&#39;Close&#39;], df[&#39;Volume&#39;])<br>    df[&#39;ADL&#39;] = ta.AD(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;], df[&#39;Volume&#39;])<br><br><br>    # Momentum-Based Indicators<br>    df[&#39;Stoch_Oscillator&#39;] = ta.STOCH(df[&#39;High&#39;], df[&#39;Low&#39;], df[&#39;Close&#39;])[0]<br><br>    df[&#39;PSAR&#39;] = ta.SAR(df[&#39;High&#39;], df[&#39;Low&#39;], acceleration=0.02, maximum=0.2)<br>    # More feature engineering...<br>    timeframe_diff = df.index[-1] - df.index[-2]<br>    timeframe = None<br><br>    # Define timeframe based on time difference<br>    if timeframe_diff == pd.Timedelta(minutes=1):<br>        timeframe = &#39;1m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=3):<br>        timeframe = &#39;3m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=5):<br>        timeframe = &#39;5m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=15):<br>        timeframe = &#39;15m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=30):<br>        timeframe = &#39;30m&#39;<br>    elif timeframe_diff == pd.Timedelta(minutes=45):<br>        timeframe = &#39;45m&#39;<br>    elif timeframe_diff == pd.Timedelta(hours=1):<br>        timeframe = &#39;1h&#39;<br>    elif timeframe_diff == pd.Timedelta(days=1):<br>        timeframe = &#39;1d&#39;<br>    elif timeframe_diff == pd.Timedelta(weeks=1):<br>        timeframe = &#39;1w&#39;<br>    else:<br>        timeframe = &#39;Not sure&#39;<br>        <br>    # print(&#39;timeframe is - &#39;, timeframe)<br><br>    # Remove rows containing inf or nan values<br>    
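A note on this cleanup step: dropna() removes NaN but not ±inf, which ratio features (e.g. Range when Low is 0) can produce, so the intended filter is "replace inf with NaN, then drop". A pure-Python sketch of that combined filter, standing in for the pandas replace/dropna pair:

```python
import math

def clean_rows(rows):
    """Drop any row containing NaN or +/-inf, mirroring
    df.replace([inf, -inf], nan) followed by df.dropna()."""
    return [
        row for row in rows
        if all(math.isfinite(v) for v in row)
    ]

cleaned = clean_rows([[1.0, 2.0], [float('inf'), 3.0], [float('nan'), 4.0], [5.0, 6.0]])
```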
df.replace([np.inf, -np.inf], np.nan, inplace=True)  # dropna() alone would keep inf values<br>    df.dropna(inplace=True)<br><br>    # Scaling<br>    scaler = MinMaxScaler(feature_range=(0,1))<br>    X = df.copy()<br>    X_scale = scaler.fit_transform(X)<br><br>    # Define a function to reshape the data<br>    def reshape_data(data, time_steps):<br>        samples = len(data) - time_steps + 1<br>        reshaped_data = np.zeros((samples, time_steps, data.shape[1]))<br>        for i in range(samples):<br>            reshaped_data[i] = data[i:i + time_steps]<br>        return reshaped_data<br><br>    # Reshape the scaled X data<br>    time_steps = 1  # Adjust the number of time steps as needed<br>    X_reshaped = reshape_data(X_scale, time_steps)<br><br>    # Now X_reshaped has the desired three-dimensional shape: (samples, time_steps, features)<br>    # Each sample contains scaled data for a specific time window<br><br>    X = X_reshaped<br><br>    # Use the loaded model to predict on the entire dataset<br>    df_ens = df.copy()<br><br>    # The argmax axis depends on the model&#39;s output shape:<br>    # axis=1 for (samples, classes), axis=2 for (samples, time_steps, classes)<br>    # df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=1)<br>    df_ens[&#39;voting_classifier_ensembel_with_scale&#39;] = np.argmax(model.predict(X), axis=2)<br><br>    df_ens[&#39;vcews&#39;] = df_ens[&#39;voting_classifier_ensembel_with_scale&#39;].shift(0).dropna().astype(int)  # shift(0) is a no-op; raise it to delay signals by n bars<br><br>    df_ens = df_ens.dropna()<br><br>    # Backtesting<br>    df_ens = df_ens.reset_index(inplace=False)<br>    df_ens[&#39;Date&#39;] = pd.to_datetime(df_ens[&#39;Date&#39;])<br>    df_ens.set_index(&#39;Date&#39;, inplace=True)<br><br>    best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>        &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        
&#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br><br>    # Define SIGNAL_3 function<br>    def SIGNAL_3(df_ens):<br>        return df_ens[&#39;vcews&#39;]<br><br>    # Define MyCandlesStrat_3 class<br>    class MyCandlesStrat_3(Strategy):  <br>        sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;] <br>        tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;] <br>        limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;] <br>        sl_pct_s = best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;] <br>        tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;] <br>        limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;] <br>        margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>        TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>        TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>        roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>        roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>        roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>        roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>        roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>        roi_at_500 = best_params[&#39;roi_at_500&#39;]<br><br>        def init(self):<br>            super().init()<br>            self.signal1 = self.I(SIGNAL_3, self.data)<br>            self.entry_time = Timestamp.now()<br>            self.max_profit = 0<br><br>        def next(self):<br>            super().next() <br>            if (self.signal1 == 1):<br>                <br>                sl_price = 
self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 0.001))<br>                limit_price_l = tp_price * 0.994<br><br>                self.position.is_long<br>                self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br>                <br>                if self.position.is_long:<br>                    self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                <br>                # Calculate current profit<br>                # current_profit = self.trades[0].pl_pct<br><br>                # Check for trailing stop loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br><br>                    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  
0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br><br>            elif (self.signal1 == 2):<br>                <br>                sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                limit_price_s = tp_price * 1.004<br><br>                self.position.is_short<br>                self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br>                <br>                if self.position.is_short:<br>                    self.entry_time = self.trades[0].entry_time  # Accessing the current datetime<br>                <br>  
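The cascaded elif branches above encode a time-decaying profit target. The same schedule reads more clearly as a lookup table; a sketch with the tier values copied from best_params (the boundary at exactly 950 minutes is simplified, and minutes are computed exactly as total_seconds() / 60 rather than via the * 0.0166 approximation):

```python
def roi_threshold(minutes_in_trade):
    """Return the ROI exit threshold (in the strategy's 0.1% units) that applies
    after a trade has been open for `minutes_in_trade` minutes, or None once
    only the unconditional 950-minute exit remains."""
    # (upper time bound in minutes, required profit) - values from best_params
    tiers = [(50, 24), (100, 20), (150, 18), (200, 15), (300, 13), (950, 10)]
    for limit, roi in tiers:
        if limit >= minutes_in_trade:
            return roi
    return None
```

Inside next(), the strategy would close the position when the trade's pl_pct exceeds roi_threshold(elapsed_minutes) * 0.001, and unconditionally once elapsed_minutes reaches 950.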
              # Calculate current profit<br>                # current_profit = self.trades[0].pl_pct<br><br>                # Check for trailing stop loss based on current profit<br>                if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                    self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                    trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                    sl_price = max((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br><br>                # Check for time interval-based selling<br>                if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                    self.position.close()<br>                elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                    
self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                    self.position.close()<br>                elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                    self.position.close()<br>                elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                    self.position.close()<br><br><br>    # Run backtest<br>    bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, margin= (1/MyCandlesStrat_3.margin_leverage), exclusive_orders=False)<br>    stat_3 = bt_3.run()<br>    print(&quot;backtest one done at 226 line - &quot;, stat_3)<br><br>    # custom_assets = {}<br>    if ((stat_3[&#39;Return [%]&#39;] &gt; (stat_3[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>        &amp; (stat_3[&#39;Profit Factor&#39;] &gt; 1.0) <br>        &amp; (stat_3[&#39;Max. 
Drawdown [%]&#39;] &gt; -40)<br>        &amp; (stat_3[&#39;Win Rate [%]&#39;] &gt; 55)<br>        &amp; (stat_3[&#39;Return [%]&#39;] &gt; 0)):<br>        file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>        <br>        best_params = {&#39;Optimizer&#39;: &#39;1st backtest - Expectancy&#39;,<br>                       &#39;model_trained_on&#39;: model_name,<br>        &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_3[&#39;Start&#39;]} upto {stat_3[&#39;End&#39;]} for a duration of {stat_3[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_3[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_3[&#39;Return [%]&#39;],3)},Expectancy % - {round(stat_3[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_3[&#39;Sharpe Ratio&#39;],4)}.&quot;,<br>        &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_long&#39;: 24,<br>        &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 15,<br>        &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 25,<br>        &#39;BEST_LIMIT_ORDER_limit_short&#39;: 24,<br>        &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>        &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>        &#39;TRAILING_STOP_PCT&#39; : 5,<br>        &#39;roi_at_50&#39; : 24,<br>        &#39;roi_at_100&#39; : 20,<br>        &#39;roi_at_150&#39; : 18,<br>        &#39;roi_at_200&#39; : 15,<br>        &#39;roi_at_300&#39; : 13,<br>        &#39;roi_at_500&#39; : 10}<br><br>        key_mapping = {<br>            &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>            &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>            &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: &#39;take_profit_percent_long&#39;,<br>            
&#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>            &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>            &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>            &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>            &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>            &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>            &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>            &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>        }<br>        custom_assets = load_custom_assets()<br>        transformed_params = {}<br>        for old_key, value in best_params.items():<br>            new_key = key_mapping.get(old_key, old_key)<br>            transformed_params[new_key] = value<br><br>        new_key = file_prefix + &quot;/USDT:USDT&quot;<br>        # custom_assets[new_key] = transformed_params<br>        # Update or add new entry to custom_assets<br><br>        if new_key in custom_assets:<br>            # Update existing entry<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key].update(transformed_params)<br>        else:<br>            # Add new entry<br>            # Multiply numerical values by 0.001 for new entry if value &gt; 1<br>            for key, value in transformed_params.items():<br>                if isinstance(value, (int, 
float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                    transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>            custom_assets[new_key] = transformed_params<br>        <br>        # Save custom_assets to JSON file<br>        save_custom_assets(custom_assets)<br>        print(custom_assets)<br>    else:<br>        # Optimization<br>        def optimize_strategy():<br>            # Optimization Params<br>            optimizer = &#39;Win Rate [%]&#39;<br><br>            stats = bt_3.optimize(<br>                sl_pct_l = range(6,100, 2), # (5,10,15,20,25,30,40,50,75,100)<br>                tp_pct_l =  range(40,100, 2), # (0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.075, 0.1)<br>                # limit_l =  (4,9,14,19,24,29,39,49,74,90),#  (0.004, 0.009, 0.014, 0.019, 0.024, 0.029, 0.039, 0.049, 0.074, 0.09)<br>                sl_pct_s = range(6,100, 2),<br>                tp_pct_s =  range(40,100, 2),<br>                # limit_s =  (4,9,14,19,24,29,39,49,74,90),<br>                margin_leverage = range(1, 8),<br>                TRAILING_ACTIVATE_PCT = range(6,100,2),<br>                TRAILING_STOP_PCT = range(6,100,2),<br>                roi_at_50 = range(6,100,2),<br>                roi_at_100 = range(6,100,2),<br>                roi_at_150 = range(6,100,2),<br>                roi_at_200 = range(6,100,2),<br>                roi_at_300 = range(6,100,2),<br>                roi_at_500 = range(6,100,2),<br>                constraint=lambda p: ( (p.sl_pct_l &gt; (p.tp_pct_l) ) and <br>                                      ((p.sl_pct_s) &gt; (p.tp_pct_s)) and <br>                                      (p.roi_at_50 &gt; p.roi_at_100) and (p.roi_at_100 &gt; p.roi_at_150) and <br>                                      (p.roi_at_150 &gt; p.roi_at_200) and (p.roi_at_200 &gt; p.roi_at_300) and (p.roi_at_300 &gt; p.roi_at_500) and<br>                                     (p.TRAILING_ACTIVATE_PCT &gt; 
p.TRAILING_STOP_PCT)),<br>                maximize = optimizer,<br>                return_optimization=True,<br>                method = &#39;skopt&#39;,<br>                max_tries = 120 # when not using the &#39;skopt&#39; method this is a fraction of the grid (0.2 = 20%, 1.0 = 100%); for &#39;skopt&#39; it is the number of optimization epochs, from 1 up to 200<br>            )<br><br>            # Extract the optimization results<br>            best_params = {<br>                &#39;Optimizer&#39;: optimizer,<br>                &#39;model_trained_on&#39;: model_name,<br>                &#39;OptimizerResult_Cross&#39;: stats[0][optimizer],<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats[1].x[0],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats[1].x[1],<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats[1].x[1] * 0.997,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats[1].x[2],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats[1].x[3],<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats[1].x[3] * 0.997,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: stats[1].x[4],<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: stats[1].x[5],<br>                &#39;TRAILING_STOP_PCT&#39;: stats[1].x[6],<br>                &#39;roi_at_50&#39;: stats[1].x[7],<br>                &#39;roi_at_100&#39;: stats[1].x[8],<br>                &#39;roi_at_150&#39;: stats[1].x[9],<br>                &#39;roi_at_200&#39;: stats[1].x[10],<br>                &#39;roi_at_300&#39;: stats[1].x[11],<br>                &#39;roi_at_500&#39;: stats[1].x[12]<br>                # &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats._strategy.sl_pct_l,<br>                # &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats._strategy.tp_pct_l,<br>                # &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats._strategy.tp_pct_l * 0.998,<br>                # &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats._strategy.sl_pct_s,<br>                # 
&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats._strategy.tp_pct_s,<br>                # &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats._strategy.sl_pct_s * 0.998,<br>                # &#39;BEST_LEVERAGE_margin_leverage&#39;: stats._strategy.margin_leverage<br>            }<br>            <br>            return best_params<br><br><br>        # Obtain best parameters<br>        best_params = optimize_strategy()<br>        print(&quot;best_params line 322 &quot;, best_params)<br><br>        if best_params:<br>            print(best_params)<br>        else:<br>            # Fallback defaults if the optimizer returned nothing; the leverage key<br>            # is included so MyCandlesStrat_11 below never raises a KeyError<br>            best_params = {&#39;Optimizer&#39;: &#39;Return [%]&#39;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: 617.5341106880867,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;: 0.024,<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 0.025,<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 0.025,<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: 0.024,<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: 1,<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: 10,<br>            &#39;TRAILING_STOP_PCT&#39;: 5,<br>            &#39;roi_at_50&#39;: 24,<br>            &#39;roi_at_100&#39;: 20,<br>            &#39;roi_at_150&#39;: 18,<br>            &#39;roi_at_200&#39;: 15,<br>            &#39;roi_at_300&#39;: 13,<br>            &#39;roi_at_500&#39;: 10}<br><br>        # Define SIGNAL_11 function<br>        def SIGNAL_11(df_ens):<br>            return df_ens[&#39;vcews&#39;]<br><br>        # Define MyCandlesStrat_11 class<br>        class MyCandlesStrat_11(Strategy):<br>            sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;]<br>            tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;]<br>            limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;]<br>            sl_pct_s = 
best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;]<br>            tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;]<br>            limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;]<br>            margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>            TRAILING_ACTIVATE_PCT = best_params[&#39;TRAILING_ACTIVATE_PCT&#39;]<br>            TRAILING_STOP_PCT = best_params[&#39;TRAILING_STOP_PCT&#39;]<br>            roi_at_50 = best_params[&#39;roi_at_50&#39;]<br>            roi_at_100 = best_params[&#39;roi_at_100&#39;]<br>            roi_at_150 = best_params[&#39;roi_at_150&#39;]<br>            roi_at_200 = best_params[&#39;roi_at_200&#39;]<br>            roi_at_300 = best_params[&#39;roi_at_300&#39;]<br>            roi_at_500 = best_params[&#39;roi_at_500&#39;]<br><br>            def init(self):<br>                super().init()<br>                self.signal1 = self.I(SIGNAL_11, self.data)<br>                self.entry_time = Timestamp.now()<br>                self.max_profit = 0<br><br>            def next(self):<br>                super().next()<br>                if (self.signal1 == 1):<br><br>                    sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 0.001))<br>                    limit_price_l = tp_price * 0.994<br><br>                    self.buy(sl=sl_price, limit=limit_price_l, tp=tp_price)<br><br>                    if self.position.is_long:<br>                        self.entry_time = self.trades[0].entry_time  # Record the entry time of the open trade<br><br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br><br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>         
               self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 + (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = min((self.data.Close[-1] * (1 - (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)<br>                    # time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br><br>                    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and 
(self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br><br>                elif (self.signal1 == 2):<br><br>                    sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                    tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                    limit_price_s = tp_price * 1.004<br><br>                    self.sell(sl=sl_price, limit=limit_price_s, tp=tp_price)<br><br>                    if self.position.is_short:<br>                        self.entry_time = self.trades[0].entry_time  # Record the entry time of the open trade<br><br>                    # Calculate current profit<br>                    # current_profit = self.trades[0].pl_pct<br><br>                    # Check for trailing stop loss based on current profit<br>                    if self.position and self.trades[0].pl_pct &gt;= (self.TRAILING_ACTIVATE_PCT * 0.001):<br>                        self.max_profit = max(self.max_profit, self.trades[0].pl_pct)<br>                        trailing_stop_price = self.trades[0].entry_price * (1 - (self.max_profit - (self.TRAILING_STOP_PCT * 0.001)))<br>                        sl_price = max((self.data.Close[-1] * (1 + (self.TRAILING_STOP_PCT * 0.001))), trailing_stop_price)  # for a short, the stop sits above price, mirroring the long side<br>                        # time_spent_by_asset1 = (self.data.index[-1] - self.trades[0].entry_time).total_seconds() / 60<br><br>                
    # Check for time interval-based selling<br>                    if self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 50) and (self.trades[0].pl_pct &gt; (self.roi_at_50 * 0.001)):<br>                        self.position.close()<br>                    elif self.position and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 50) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 100) and (self.trades[0].pl_pct &gt; (self.roi_at_100 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 100) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 150) and (self.trades[0].pl_pct &gt; (self.roi_at_150 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 150) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 200) and (self.trades[0].pl_pct &gt; (self.roi_at_200 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 200) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt;= 300) and (self.trades[0].pl_pct &gt; (self.roi_at_300 * 0.001)):<br>                        self.position.close()<br>                    elif self.position  and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt; 300) and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&lt; 950) and (self.trades[0].pl_pct &gt; (self.roi_at_500 * 0.001)):<br>                        self.position.close()<br>                    elif self.position 
and ((self.data.index[-1] - self.trades[0].entry_time).total_seconds()  *  0.0166&gt;= 950):<br>                        self.position.close()<br><br><br>        # Run backtest with optimized parameters<br>        bt_11 = Backtest(df_ens, MyCandlesStrat_11, cash=100000, commission=.001, margin=(1 / MyCandlesStrat_11.margin_leverage), exclusive_orders=False)<br>        stat_11 = bt_11.run()<br><br>        print(&quot;stat_11 line 388 - &quot;, stat_11)<br><br>        # Additional processing for custom_assets<br>        # custom_assets = {}<br>        if ((stat_11[&#39;Return [%]&#39;] &gt; (stat_11[&#39;Buy &amp; Hold Return [%]&#39;] * 3)) <br>            &amp; (stat_11[&#39;Profit Factor&#39;] &gt; 1.0)<br>            &amp; (stat_11[&#39;Max. Drawdown [%]&#39;] &gt; -35)<br>            &amp; (stat_11[&#39;Win Rate [%]&#39;] &gt; 52)<br>            &amp; (stat_11[&#39;Return [%]&#39;] &gt; 0)):<br>            file_prefix = file_path.split(&#39;/&#39;)[-1].split(&#39;.&#39;)[0]<br>            <br>            print(f&quot;second backtest success for {file_prefix}/USDT:USDT with Win Rate % of {stat_11[&#39;Win Rate [%]&#39;]} and with Return in % of {stat_11[&#39;Return [%]&#39;]}&quot; )<br>            <br>            <br>            best_params = {&#39;Optimizer&#39;: &#39;2nd backtest with Expectancy&#39;,<br>            # &#39;OptimizerResult_Cross&#39;: f&quot;2nd backtest, Sharpe Ratio - {stat_11[&#39;Sharpe Ratio&#39;]}, Returns % - {stat_11[&#39;Return [%]&#39;]}, Win Rate % - {stat_11[&#39;Win Rate [%]&#39;]}&quot;,<br>                           &#39;model_trained_on&#39;: model_name,<br>            &#39;OptimizerResult_Cross&#39;: f&quot;For {file_prefix}/USDT:USDT backtest was done from {stat_11[&#39;Start&#39;]} upto {stat_11[&#39;End&#39;]} for a duration of {stat_11[&#39;Duration&#39;]} using time frame of {timeframe} with Win Rate % - {round(stat_11[&#39;Win Rate [%]&#39;],2)}, Return % - {round(stat_11[&#39;Return [%]&#39;],3)}, Expectancy % - 
{round(stat_11[&#39;Expectancy [%]&#39;],5)} and Sharpe Ratio - {round(stat_11[&#39;Sharpe Ratio&#39;],3)}.&quot;,<br>            &#39;BEST_STOP_LOSS_sl_pct_long&#39;: MyCandlesStrat_11.sl_pct_l.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: MyCandlesStrat_11.tp_pct_l.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_long&#39;: round(MyCandlesStrat_11.tp_pct_l.tolist() * 0.996, 2),<br>            &#39;BEST_STOP_LOSS_sl_pct_short&#39;: MyCandlesStrat_11.sl_pct_s.tolist(),<br>            &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: MyCandlesStrat_11.tp_pct_s.tolist(),<br>            &#39;BEST_LIMIT_ORDER_limit_short&#39;: round(MyCandlesStrat_11.tp_pct_s.tolist() * 0.996, 2),  # limit tracks the take profit, mirroring the long side<br>            &#39;BEST_LEVERAGE_margin_leverage&#39;: MyCandlesStrat_11.margin_leverage.tolist(),<br>            &#39;TRAILING_ACTIVATE_PCT&#39;: MyCandlesStrat_11.TRAILING_ACTIVATE_PCT.tolist(),<br>            &#39;TRAILING_STOP_PCT&#39;: MyCandlesStrat_11.TRAILING_STOP_PCT.tolist(),<br>            &#39;roi_at_50&#39;: MyCandlesStrat_11.roi_at_50.tolist(),<br>            &#39;roi_at_100&#39;: MyCandlesStrat_11.roi_at_100.tolist(),<br>            &#39;roi_at_150&#39;: MyCandlesStrat_11.roi_at_150.tolist(),<br>            &#39;roi_at_200&#39;: MyCandlesStrat_11.roi_at_200.tolist(),<br>            &#39;roi_at_300&#39;: MyCandlesStrat_11.roi_at_300.tolist(),<br>            &#39;roi_at_500&#39;: MyCandlesStrat_11.roi_at_500.tolist()<br>                          }<br>            <br>            # print(&quot;best_params under stat_11 &quot;, best_params)<br><br>            key_mapping = {<br>                &#39;Optimizer&#39;: &#39;Optimizer_used&#39;,<br>                &#39;model_trained_on&#39;: &#39;model_name&#39;,<br>                &#39;OptimizerResult_Cross&#39;: &#39;Optimizer_result&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: &#39;stop_loss_percent_long&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 
&#39;take_profit_percent_long&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: &#39;limit_long&#39;,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: &#39;stop_loss_percent_short&#39;,<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: &#39;take_profit_percent_short&#39;,<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: &#39;limit_short&#39;,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: &#39;margin_leverage&#39;,<br>                &#39;TRAILING_ACTIVATE_PCT&#39;: &#39;TRAILING_ACTIVATE_PCT&#39;,<br>                &#39;TRAILING_STOP_PCT&#39; : &#39;TRAILING_STOP_PCT&#39;,<br>                &#39;roi_at_50&#39; : &#39;roi_at_50&#39;,<br>                &#39;roi_at_100&#39; : &#39;roi_at_100&#39;,<br>                &#39;roi_at_150&#39; :&#39;roi_at_150&#39;,<br>                &#39;roi_at_200&#39; : &#39;roi_at_200&#39;,<br>                &#39;roi_at_300&#39; : &#39;roi_at_300&#39;,<br>                &#39;roi_at_500&#39; : &#39;roi_at_500&#39;<br>            }<br>            # Update or add new entry to custom_assets<br>            custom_assets = load_custom_assets()<br>            <br>            transformed_params = {}<br>            for old_key, value in best_params.items():<br>                new_key = key_mapping.get(old_key, old_key)<br>                transformed_params[new_key] = value<br><br>            new_key = file_prefix + &quot;/USDT:USDT&quot;<br>            # custom_assets[new_key] = transformed_params<br><br>            if new_key in custom_assets:<br>                # Update existing entry<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key].update(transformed_params)<br>            else:<br>                # Add new entry<br>    
            # Multiply numerical values by 0.001 for new entry if value &gt;= 1<br>                for key, value in transformed_params.items():<br>                    if isinstance(value, (int, float)) and key != &#39;margin_leverage&#39; and value &gt;= 1:<br>                        transformed_params[key] = round(transformed_params[key] * 0.001, 5)<br>                custom_assets[new_key] = transformed_params<br><br>            # Save custom_assets to JSON file<br>            save_custom_assets(custom_assets)<br>        print(&quot;custom_assets after save &quot;, custom_assets)<br><br>    return df, symbol_name, custom_assets<br>    # except Exception as e:<br>    #     # Print the error message<br>    #     print(f&quot;Error processing {file_path}: {e}&quot;)<br>    #     print(&quot;custom assets at error level line 361 &quot;, custom_assets)<br>    #     # Return None for both DataFrame and symbol name to indicate failure<br>    #     return None, symbol_name, custom_assets<br><br><br># Define a thread worker function<br>def thread_worker(file):<br>    result = process_json(file)<br>    return result<br><br>def main():<br>    # Get a list of all JSON files in the folder<br>    # NOTE: make sure to mention the tradingview downloaded data folder here<br>    json_files = [f&quot;./tradingview_crypto_assets_15m/{file}&quot; for file in os.listdir(&quot;./tradingview_crypto_assets_15m/&quot;) if file.endswith(&quot;.json&quot;)]<br>    # print(json_files)<br><br>    # Get the number of available CPU cores<br>    num_cores = os.cpu_count()<br>    # print(num_cores)<br><br>    # Set the max_workers parameter based on the number of CPU cores<br>    max_workers = num_cores if num_cores else 1  # os.cpu_count() can return None, so default to 1 in that case<br>    # max_workers = 1<br>    print(&#39;max workers (Total Number of CPU cores to be used) - &#39;, max_workers)<br><br>    # Process JSON files in parallel using 
multi-core processing<br>    with ThreadPoolExecutor(max_workers=max_workers) as executor:<br>        # Submit threads for each JSON file<br>        futures = [executor.submit(thread_worker, file) for file in json_files]<br><br>    # Wait for all threads to complete<br>    results = [future.result() for future in futures]<br><br>    # Process the results as needed<br>    for result in results:<br>        if result is None:<br>            continue<br>        df, symbol_name, custom_assets = result<br>        print(f&quot;Processed {symbol_name}&quot;)<br>        print(f&#39;custom_assets &#39;, custom_assets)<br>        if custom_data:  # Check if custom_data is not None<br>            custom_assets.update(custom_data)<br>            <br># Define a function to continuously run the loop<br>def run_continuous_loop():<br>    while True:<br>        main()<br><br># Start the continuous loop in a separate thread<br>thread = threading.Thread(target=run_continuous_loop)<br>thread.start()<br></pre><pre>output:<br>max workers (Total Number of CPU cores to be used) -  4<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   85.237208<br>Equity Final [$]                  45917.74697<br>Equity Peak [$]                  119511.93047<br>Return [%]                         -54.082253<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                  -98.222272<br>Volatility (Ann.) [%]                3.390676<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -63.780594<br>Avg. 
Drawdown [%]                   -7.944307<br>Max. Drawdown Duration       65 days 12:15:00<br>Avg. Drawdown Duration        6 days 13:06:00<br># Trades                                  704<br>Win Rate [%]                        42.471591<br>Best Trade [%]                       7.078622<br>Worst Trade [%]                     -5.342172<br>Avg. Trade [%]                      -0.100692<br>Max. Trade Duration           0 days 16:00:00<br>Avg. Trade Duration           0 days 02:09:00<br>Profit Factor                        0.910244<br>Expectancy [%]                      -0.083294<br>SQN                                 -1.448338<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                          Size  Ent...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   80.894367<br>Equity Final [$]                 71868.014917<br>Equity Peak [$]                 154437.708717<br>Return [%]                         -28.131985<br>Buy &amp; Hold Return [%]               10.347826<br>Return (Ann.) [%]                  -83.933331<br>Volatility (Ann.) [%]              104.791887<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                   -55.05933<br>Avg. Drawdown [%]                   -9.306736<br>Max. Drawdown Duration       36 days 14:30:00<br>Avg. Drawdown Duration        4 days 05:51:00<br># Trades                                 1080<br>Win Rate [%]                         42.12963<br>Best Trade [%]                      17.226306<br>Worst Trade [%]                     -9.397662<br>Avg. Trade [%]                      -0.039244<br>Max. 
Trade Duration           0 days 13:45:00<br>Avg. Trade Duration           0 days 01:26:00<br>Profit Factor                        0.980051<br>Expectancy [%]                      -0.018739<br>SQN                                 -0.431999<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                           Size  En...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                    77.08184<br>Equity Final [$]                 88362.186342<br>Equity Peak [$]                 125118.711904<br>Return [%]                         -11.637814<br>Buy &amp; Hold Return [%]               -7.367375<br>Return (Ann.) [%]                  -59.110683<br>Volatility (Ann.) [%]               52.239336<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -33.108562<br>Avg. Drawdown [%]                   -5.014365<br>Max. Drawdown Duration       28 days 20:15:00<br>Avg. Drawdown Duration        3 days 10:31:00<br># Trades                                  747<br>Win Rate [%]                        46.318608<br>Best Trade [%]                       4.152216<br>Worst Trade [%]                     -2.642371<br>Avg. Trade [%]                      -0.029744<br>Max. Trade Duration           1 days 15:15:00<br>Avg. 
Trade Duration           0 days 02:42:00<br>Profit Factor                        0.980657<br>Expectancy [%]                      -0.015375<br>SQN                                 -0.276569<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   88.247098<br>Equity Final [$]                 111201.92929<br>Equity Peak [$]                  130506.62718<br>Return [%]                          11.201929<br>Buy &amp; Hold Return [%]              -22.347518<br>Return (Ann.) [%]                   40.306102<br>Volatility (Ann.) [%]              146.602571<br>Sharpe Ratio                         0.274934<br>Sortino Ratio                        0.738824<br>Calmar Ratio                          1.08927<br>Max. Drawdown [%]                  -37.002843<br>Avg. Drawdown [%]                    -2.84137<br>Max. Drawdown Duration       48 days 13:15:00<br>Avg. Drawdown Duration        1 days 18:11:00<br># Trades                                  776<br>Win Rate [%]                        55.283505<br>Best Trade [%]                       5.414731<br>Worst Trade [%]                     -2.317394<br>Avg. Trade [%]                       0.121652<br>Max. Trade Duration           1 days 20:00:00<br>Avg. 
Trade Duration           0 days 03:35:00<br>Profit Factor                        1.205516<br>Expectancy [%]                       0.134008<br>SQN                                  0.307387<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br>{&#39;MATIC/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;Optimizer_result&#39;: &#39;For MATIC/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 61.29, Return % - 128.107,Expectancy % - 0.3474 and Sharpe Ratio - 1.1267.&#39;, &#39;stop_loss_percent_long&#39;: 0.015, &#39;take_profit_percent_long&#39;: 0.025, &#39;limit_long&#39;: 0.024, &#39;stop_loss_percent_short&#39;: 0.015, &#39;take_profit_percent_short&#39;: 0.025, &#39;limit_short&#39;: 0.024, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.01, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.024, &#39;roi_at_100&#39;: 0.02, &#39;roi_at_150&#39;: 0.018, &#39;roi_at_200&#39;: 0.015, &#39;roi_at_300&#39;: 0.013, &#39;roi_at_500&#39;: 0.01}, &#39;BAL/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;Optimizer_result&#39;: &#39;For BAL/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 60.18, Return % - 122.4,Expectancy % - 0.24871 and Sharpe Ratio - 1.1628.&#39;, &#39;stop_loss_percent_long&#39;: 0.015, &#39;take_profit_percent_long&#39;: 0.025, &#39;limit_long&#39;: 0.024, &#39;stop_loss_percent_short&#39;: 0.015, &#39;take_profit_percent_short&#39;: 0.025, 
&#39;limit_short&#39;: 0.024, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.01, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.024, &#39;roi_at_100&#39;: 0.02, &#39;roi_at_150&#39;: 0.018, &#39;roi_at_200&#39;: 0.015, &#39;roi_at_300&#39;: 0.013, &#39;roi_at_500&#39;: 0.01}, &#39;LINK/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;2nd backtest with Expectancy&#39;, &#39;model_name&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;Optimizer_result&#39;: &#39;For LINK/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 56.15, Return % - 73.826, Expectancy % - 0.37251 and Sharpe Ratio - 1.03.&#39;, &#39;stop_loss_percent_long&#39;: 0.09, &#39;take_profit_percent_long&#39;: 0.083, &#39;limit_long&#39;: 0.08267, &#39;stop_loss_percent_short&#39;: 0.041, &#39;take_profit_percent_short&#39;: 0.041, &#39;limit_short&#39;: 0.04084, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.09, &#39;TRAILING_STOP_PCT&#39;: 0.007, &#39;roi_at_50&#39;: 0.052, &#39;roi_at_100&#39;: 0.087, &#39;roi_at_150&#39;: 0.082, &#39;roi_at_200&#39;: 0.071, &#39;roi_at_300&#39;: 0.042, &#39;roi_at_500&#39;: 0.033}, &#39;XMR/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;Optimizer_result&#39;: &#39;For XMR/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 62.03, Return % - 19.962,Expectancy % - 0.25496 and Sharpe Ratio - 0.5918.&#39;, &#39;stop_loss_percent_long&#39;: 0.015, &#39;take_profit_percent_long&#39;: 0.025, &#39;limit_long&#39;: 0.024, &#39;stop_loss_percent_short&#39;: 0.015, &#39;take_profit_percent_short&#39;: 0.025, &#39;limit_short&#39;: 0.024, &#39;margin_leverage&#39;: 
1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.01, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.024, &#39;roi_at_100&#39;: 0.02, &#39;roi_at_150&#39;: 0.018, &#39;roi_at_200&#39;: 0.015, &#39;roi_at_300&#39;: 0.013, &#39;roi_at_500&#39;: 0.01}, &#39;QNT/USDT:USDT&#39;: {&#39;Optimizer_used&#39;: &#39;1st backtest - Expectancy&#39;, &#39;model_name&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;Optimizer_result&#39;: &#39;For QNT/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 55.28, Return % - 11.202,Expectancy % - 0.13401 and Sharpe Ratio - 0.2749.&#39;, &#39;stop_loss_percent_long&#39;: 0.015, &#39;take_profit_percent_long&#39;: 0.025, &#39;limit_long&#39;: 0.024, &#39;stop_loss_percent_short&#39;: 0.015, &#39;take_profit_percent_short&#39;: 0.025, &#39;limit_short&#39;: 0.024, &#39;margin_leverage&#39;: 1, &#39;TRAILING_ACTIVATE_PCT&#39;: 0.01, &#39;TRAILING_STOP_PCT&#39;: 0.005, &#39;roi_at_50&#39;: 0.024, &#39;roi_at_100&#39;: 0.02, &#39;roi_at_150&#39;: 0.018, &#39;roi_at_200&#39;: 0.015, &#39;roi_at_300&#39;: 0.013, &#39;roi_at_500&#39;: 0.01}}<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   85.251541<br>Equity Final [$]                 66100.658735<br>Equity Peak [$]                 108258.888017<br>Return [%]                         -33.899341<br>Buy &amp; Hold Return [%]               -34.64684<br>Return (Ann.) [%]                  -88.874627<br>Volatility (Ann.) [%]                10.41704<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -40.053418<br>Avg. 
Drawdown [%]                   -5.883064<br>Max. Drawdown Duration       63 days 02:30:00<br>Avg. Drawdown Duration        4 days 20:06:00<br># Trades                                  657<br>Win Rate [%]                        45.053272<br>Best Trade [%]                       4.368109<br>Worst Trade [%]                     -4.271553<br>Avg. Trade [%]                      -0.073639<br>Max. Trade Duration           1 days 19:15:00<br>Avg. Trade Duration           0 days 03:10:00<br>Profit Factor                        0.920052<br>Expectancy [%]                      -0.060695<br>SQN                                 -1.020906<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                         Size  Entr...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>best_params line 322  {&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 54.54545454545454, &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 31, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 71, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 70.787, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 45, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 56, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 55.832, &#39;BEST_LEVERAGE_margin_leverage&#39;: 4, &#39;TRAILING_ACTIVATE_PCT&#39;: 91, &#39;TRAILING_STOP_PCT&#39;: 62, &#39;roi_at_50&#39;: 16, &#39;roi_at_100&#39;: 36, &#39;roi_at_150&#39;: 32, &#39;roi_at_200&#39;: 51, &#39;roi_at_300&#39;: 27, &#39;roi_at_500&#39;: 30}<br>{&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 54.54545454545454, &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 31, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 71, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 70.787, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 
45, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 56, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 55.832, &#39;BEST_LEVERAGE_margin_leverage&#39;: 4, &#39;TRAILING_ACTIVATE_PCT&#39;: 91, &#39;TRAILING_STOP_PCT&#39;: 62, &#39;roi_at_50&#39;: 16, &#39;roi_at_100&#39;: 36, &#39;roi_at_150&#39;: 32, &#39;roi_at_200&#39;: 51, &#39;roi_at_300&#39;: 27, &#39;roi_at_500&#39;: 30}<br>best_params line 322  {&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 31.80379746835443, &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 16, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 47, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 46.859, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 9, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 56, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 55.832, &#39;BEST_LEVERAGE_margin_leverage&#39;: 3, &#39;TRAILING_ACTIVATE_PCT&#39;: 35, &#39;TRAILING_STOP_PCT&#39;: 89, &#39;roi_at_50&#39;: 93, &#39;roi_at_100&#39;: 22, &#39;roi_at_150&#39;: 26, &#39;roi_at_200&#39;: 28, &#39;roi_at_300&#39;: 27, &#39;roi_at_500&#39;: 13}<br>{&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 31.80379746835443, &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 16, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 47, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 46.859, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 9, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 56, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 55.832, &#39;BEST_LEVERAGE_margin_leverage&#39;: 3, &#39;TRAILING_ACTIVATE_PCT&#39;: 35, &#39;TRAILING_STOP_PCT&#39;: 89, &#39;roi_at_50&#39;: 93, &#39;roi_at_100&#39;: 22, &#39;roi_at_150&#39;: 26, &#39;roi_at_200&#39;: 28, &#39;roi_at_300&#39;: 27, &#39;roi_at_500&#39;: 13}<br>stat_11 line 388 -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration       
              72 days 16:00:00<br>Exposure Time [%]                   91.543643<br>Equity Final [$]                  3470.378194<br>Equity Peak [$]                 205964.422286<br>Return [%]                         -96.529622<br>Buy &amp; Hold Return [%]              -27.134777<br>Return (Ann.) [%]                  -99.999997<br>Volatility (Ann.) [%]                0.096112<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -98.590775<br>Avg. Drawdown [%]                  -12.996703<br>Max. Drawdown Duration       65 days 12:15:00<br>Avg. Drawdown Duration        4 days 12:01:00<br># Trades                                 1122<br>Win Rate [%]                        61.051693<br>Best Trade [%]                       7.078622<br>Worst Trade [%]                     -5.342172<br>Avg. Trade [%]                       0.194594<br>Max. Trade Duration           0 days 23:45:00<br>Avg. Trade Duration           0 days 05:07:00<br>Profit Factor                        1.216832<br>Expectancy [%]                       0.230671<br>SQN                                 -0.781608<br>_strategy                   MyCandlesStrat_11<br>_equity_curve                             ...<br>_trades                           Size  En...<br>dtype: object<br>stat_11 line 388 -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   82.614304<br>Equity Final [$]                   128.433999<br>Equity Peak [$]                 116277.955204<br>Return [%]                         -99.871566<br>Buy &amp; Hold Return [%]               10.347826<br>Return (Ann.) [%]                      -100.0<br>Volatility (Ann.) 
[%]                0.000003<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -99.900614<br>Avg. Drawdown [%]                  -21.898921<br>Max. Drawdown Duration       72 days 10:30:00<br>Avg. Drawdown Duration       14 days 12:30:00<br># Trades                                 1806<br>Win Rate [%]                        49.612403<br>Best Trade [%]                      17.226306<br>Worst Trade [%]                     -9.397662<br>Avg. Trade [%]                       0.183703<br>Max. Trade Duration           0 days 19:30:00<br>Avg. Trade Duration           0 days 01:54:00<br>Profit Factor                        1.245448<br>Expectancy [%]                       0.206416<br>SQN                                 -2.754739<br>_strategy                   MyCandlesStrat_11<br>_equity_curve                             ...<br>_trades                           Size  En...<br>dtype: object<br>best_params line 322  {&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 45.22727272727273, &#39;BEST_STOP_LOSS_sl_pct_long&#39;: 31, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 86, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 85.742, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 17, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 40, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 39.88, &#39;BEST_LEVERAGE_margin_leverage&#39;: 4, &#39;TRAILING_ACTIVATE_PCT&#39;: 52, &#39;TRAILING_STOP_PCT&#39;: 65, &#39;roi_at_50&#39;: 22, &#39;roi_at_100&#39;: 55, &#39;roi_at_150&#39;: 92, &#39;roi_at_200&#39;: 76, &#39;roi_at_300&#39;: 94, &#39;roi_at_500&#39;: 65}<br>{&#39;Optimizer&#39;: &#39;Win Rate [%]&#39;, &#39;model_trained_on&#39;: &#39;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&#39;, &#39;OptimizerResult_Cross&#39;: 45.22727272727273, 
&#39;BEST_STOP_LOSS_sl_pct_long&#39;: 31, &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: 86, &#39;BEST_LIMIT_ORDER_limit_long&#39;: 85.742, &#39;BEST_STOP_LOSS_sl_pct_short&#39;: 17, &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: 40, &#39;BEST_LIMIT_ORDER_limit_short&#39;: 39.88, &#39;BEST_LEVERAGE_margin_leverage&#39;: 4, &#39;TRAILING_ACTIVATE_PCT&#39;: 52, &#39;TRAILING_STOP_PCT&#39;: 65, &#39;roi_at_50&#39;: 22, &#39;roi_at_100&#39;: 55, &#39;roi_at_150&#39;: 92, &#39;roi_at_200&#39;: 76, &#39;roi_at_300&#39;: 94, &#39;roi_at_500&#39;: 65}<br>stat_11 line 388 -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   87.200803<br>Equity Final [$]                  4848.978509<br>Equity Peak [$]                 261083.892993<br>Return [%]                         -95.151021<br>Buy &amp; Hold Return [%]               -7.367375<br>Return (Ann.) [%]                  -99.999985<br>Volatility (Ann.) [%]                1.828415<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -98.874892<br>Avg. Drawdown [%]                  -15.133227<br>Max. Drawdown Duration       70 days 01:15:00<br>Avg. Drawdown Duration        5 days 03:32:00<br># Trades                                  857<br>Win Rate [%]                        45.507585<br>Best Trade [%]                       8.504436<br>Worst Trade [%]                     -3.262618<br>Avg. Trade [%]                       0.073571<br>Max. Trade Duration           2 days 10:30:00<br>Avg. 
Trade Duration           0 days 07:09:00<br>Profit Factor                        1.099905<br>Expectancy [%]                       0.119072<br>SQN                                  -0.75672<br>_strategy                   MyCandlesStrat_11<br>_equity_curve                             ...<br>_trades                        Size  Entry...<br>dtype: object<br>217/217 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step<br>219/219 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step<br>backtest one done at 226 line -  Start                     2024-03-02 11:45:00<br>End                       2024-05-14 03:45:00<br>Duration                     72 days 16:00:00<br>Exposure Time [%]                   85.624194<br>Equity Final [$]                 85965.146775<br>Equity Peak [$]                 111310.492298<br>Return [%]                         -14.034853<br>Buy &amp; Hold Return [%]              -32.404148<br>Return (Ann.) [%]                  -56.854276<br>Volatility (Ann.) [%]               56.077855<br>Sharpe Ratio                              0.0<br>Sortino Ratio                             0.0<br>Calmar Ratio                              0.0<br>Max. Drawdown [%]                  -49.967758<br>Avg. Drawdown [%]                   -6.646923<br>Max. Drawdown Duration       63 days 03:15:00<br>Avg. Drawdown Duration        4 days 19:56:00<br># Trades                                  724<br>Win Rate [%]                        46.961326<br>Best Trade [%]                       4.150219<br>Worst Trade [%]                     -3.770702<br>Avg. Trade [%]                       0.011281<br>Max. Trade Duration           1 days 08:45:00<br>Avg. 
Trade Duration           0 days 02:44:00<br>Profit Factor                        1.033081<br>Expectancy [%]                       0.025585<br>SQN                                  -0.43494<br>_strategy                    MyCandlesStrat_3<br>_equity_curve                             ...<br>_trades                         Size  Entr...<br>dtype: object<br><br>Backtest.optimize:   0%|          | 0/120 [00:00&lt;?, ?it/s]<br><br>.................................................................................................................................<br>(the output continues for all assets; the shortlisted assets are then saved to custom_assets.txt)<br></pre><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p>The Python code behind this output backtests a cryptocurrency trading strategy. 
Here’s a breakdown of the code functionalities:</p><p><strong>Data Processing:</strong></p><ol><li><strong>Function </strong><strong>process_json:</strong> This function reads a JSON file containing cryptocurrency price data.</li><li><strong>Data Cleaning and Transformation:</strong> It cleans and transforms the data by:</li></ol><ul><li>Renaming columns to standard names (e.g., ‘date’ to ‘Date’).</li><li>Converting the ‘Date’ column to datetime format.</li><li>Setting ‘Date’ as the index.</li><li>Filling missing values in the ‘Close’ column with the previous close price.</li><li>Extracting the symbol name from the ‘symbol’ column.</li></ul><ol><li><strong>Technical Indicator Calculation:</strong> The script calculates various technical indicators like ATR, EMA, RSI, etc., using the ta library (assumed to be imported).</li><li><strong>Feature Engineering:</strong> It creates additional features like returns, volatility, volume-based indicators, and momentum-based indicators.</li><li><strong>Data Scaling:</strong> The script scales the data using MinMaxScaler for better model performance during backtesting.</li><li><strong>Reshaping Data:</strong> The data is reshaped into a format suitable for the trading strategy (e.g., sequences of past price data).</li></ol><p><strong>Backtesting Strategy:</strong></p><ol><li><strong>Function </strong><strong>SIGNAL_3:</strong> This function likely defines the trading signals based on some criteria (not shown in the provided code).</li><li><strong>Class </strong><strong>MyCandlesStrat_3:</strong> This class defines the trading strategy using the backtesting.py library (evident from the Backtest.optimize progress lines in the output above). Key elements include:</li></ol><ul><li><strong>Stop-loss and Take-profit:</strong> These are set based on predefined percentages (BEST_STOP_LOSS_sl_pct_long, etc.) 
for long and short positions.</li><li><strong>Limit orders:</strong> These are used to ensure order execution within a specific price range.</li><li><strong>Trailing Stop-loss:</strong> The stop-loss is dynamically adjusted based on current profit to lock in gains.</li><li><strong>Time-based profit taking:</strong> Profits are automatically locked in after a certain time holding the asset.</li><li><strong>Leverage:</strong> The strategy uses a predefined leverage multiplier (BEST_LEVERAGE_margin_leverage).</li></ul><p><strong>Backtesting and Analysis:</strong></p><ol><li><strong>Backtest:</strong> The script performs a backtest on the processed data using the MyCandlesStrat_3 strategy with a starting capital of 100000.</li><li><strong>Performance Metrics:</strong> Backtesting results likely include various performance metrics like returns, Sharpe Ratio, Win Rate, and Drawdown (not explicitly shown in the provided code).</li></ol><p><strong>Conditional Logic:</strong></p><ul><li>The script checks if certain performance conditions are met (high return, good profit factor, etc.).</li><li>If the conditions are satisfied, the script potentially saves the trading strategy parameters for this specific asset.</li></ul><p>Usage of ThreadPoolExecutor class for parallel processing of JSON files. Here&#39;s a breakdown of its functionality:</p><p><strong>1. Thread Worker Function (</strong><strong>thread_worker):</strong></p><ul><li>This function takes a single JSON file path as input (file).</li><li>It calls the process_json function (assumed to be defined elsewhere) to process the JSON data.</li><li>It returns the processed result, likely a Pandas DataFrame (df), symbol name (symbol_name), and potentially other custom data (custom_assets).</li></ul><p><strong>2. 
Main Function (</strong><strong>main):</strong></p><ul><li>It retrieves a list of all JSON files within a specified folder (./tradingview_crypto_assets_15m/).</li><li>It determines the number of available CPU cores using os.cpu_count().</li><li>It sets the max_workers parameter for the ThreadPoolExecutor based on the CPU cores (using all cores if available, defaulting to 1 otherwise).</li><li>It prints the number of cores to be used for processing.</li><li>It creates a ThreadPoolExecutor with the determined max_workers.</li><li>It iterates through the list of JSON files and submits each file path to the thread pool using executor.submit(thread_worker, file). This creates tasks for each file to be processed concurrently.</li><li>It waits for all submitted tasks (futures) to complete using future.result() and stores the results in a list (results).</li><li>It iterates through the processing results:</li><li>If a result is None, it skips to the next iteration (potentially handling errors).</li><li>Otherwise, it unpacks the result (df, symbol_name, and potentially custom_assets).</li><li>It prints information about the processed symbol and the custom assets (if any).</li><li>It conditionally updates custom_assets with additional custom data (custom_data) if it exists (logic not entirely shown).</li></ul><p><strong>3. Continuous Loop Function (</strong><strong>run_continuous_loop):</strong></p><ul><li>This function defines an infinite loop (while True).</li><li>Inside the loop, it calls the main function, presumably to process a batch of JSON files repeatedly.</li></ul><p><strong>4. Starting the Loop:</strong></p><ul><li>The code creates a separate thread using threading.Thread and sets its target to the run_continuous_loop function.</li><li>Finally, it starts the thread, initiating the continuous processing loop.</li></ul><p><strong>Overall, this code snippet demonstrates parallel processing of JSON files using a thread pool based on CPU cores. 
The loop continuously processes batches of files.</strong></p><p><strong>The code demonstrates a framework for backtesting a cryptocurrency trading strategy that uses technical indicators and incorporates risk management techniques like stop-loss and trailing stop-loss.</strong></p><p><strong>Disclaimer:</strong></p><ul><li>Always remember that backtesting results may not be indicative of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h4>custom_assets.txt Output:</h4><pre>{<br>    &quot;MATIC/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For MATIC/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 61.29, Return % - 128.107,Expectancy % - 0.3474 and Sharpe Ratio - 1.1267.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;BAL/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: 
&quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For BAL/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 60.18, Return % - 122.4,Expectancy % - 0.24871 and Sharpe Ratio - 1.1628.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;LINK/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For LINK/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 55.7, Return % - 28.367,Expectancy % - 0.18038 and Sharpe Ratio - 0.8733.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>     
   &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;XMR/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For XMR/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 62.03, Return % - 19.962,Expectancy % - 0.25496 and Sharpe Ratio - 0.5918.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;QNT/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For QNT/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 55.28, Return % - 11.202,Expectancy % - 
0.13401 and Sharpe Ratio - 0.2749.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    &quot;ETH/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For ETH/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 58.31, Return % - 36.053,Expectancy % - 0.25672 and Sharpe Ratio - 1.2091.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    },<br>    
&quot;CRV/USDT:USDT&quot;: {<br>        &quot;Optimizer_used&quot;: &quot;1st backtest - Expectancy&quot;,<br>        &quot;model_name&quot;: &quot;transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras&quot;,<br>        &quot;Optimizer_result&quot;: &quot;For CRV/USDT:USDT backtest was done from 2024-03-02 11:45:00 upto 2024-05-14 03:45:00 for a duration of 72 days 16:00:00 using time frame of 15m with Win Rate % - 55.47, Return % - 144.875,Expectancy % - 0.24103 and Sharpe Ratio - 0.7808.&quot;,<br>        &quot;stop_loss_percent_long&quot;: 0.015,<br>        &quot;take_profit_percent_long&quot;: 0.025,<br>        &quot;limit_long&quot;: 0.024,<br>        &quot;stop_loss_percent_short&quot;: 0.015,<br>        &quot;take_profit_percent_short&quot;: 0.025,<br>        &quot;limit_short&quot;: 0.024,<br>        &quot;margin_leverage&quot;: 1,<br>        &quot;TRAILING_ACTIVATE_PCT&quot;: 0.01,<br>        &quot;TRAILING_STOP_PCT&quot;: 0.005,<br>        &quot;roi_at_50&quot;: 0.024,<br>        &quot;roi_at_100&quot;: 0.02,<br>        &quot;roi_at_150&quot;: 0.018,<br>        &quot;roi_at_200&quot;: 0.015,<br>        &quot;roi_at_300&quot;: 0.013,<br>        &quot;roi_at_500&quot;: 0.01<br>    }<br>.................................... <br>(all 27 assets were shortlisted per the parameters we set during <br>optimization and backtesting, using the downloaded data for the <br>neural network model we trained)<br><br>}</pre><p>The data above shows the results of backtesting the trading strategy on multiple assets. 
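</p><p>A minimal sketch of how a file in this shape could be loaded and re-filtered in Python; the win-rate threshold and string parsing are illustrative assumptions, not part of the article's code.</p>

```python
import json

def load_custom_assets(path="custom_assets.txt"):
    """Read the shortlisted per-asset parameters saved by the backtester."""
    with open(path) as f:
        return json.load(f)

def filter_by_win_rate(assets, min_win_rate=55.0):
    """Keep assets whose recorded win rate clears a threshold (illustrative criterion)."""
    kept = {}
    for symbol, params in assets.items():
        text = params.get("Optimizer_result", "")
        marker = "Win Rate % - "
        if marker not in text:
            continue
        # The win rate is embedded in the result string, e.g. "Win Rate % - 61.29,"
        win_rate = float(text.split(marker)[1].split(",")[0])
        if win_rate >= min_win_rate:
            kept[symbol] = params
    return kept
```

<p>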
Here’s a breakdown of the information:</p><p><strong>Structure:</strong></p><ul><li>It’s a dictionary with currency pairs (e.g., “MATIC/USDT:USDT”) as keys.</li></ul><p><strong>Content for Each Asset:</strong></p><ul><li><strong>Optimizer_used:</strong> This specifies the optimization method used for backtesting (here, “1st backtest — Expectancy”).</li><li><strong>model_name:</strong> This indicates the model name used for the trading strategy (“transformer_model_55sl_55tp_eth_15m_may_13th_ShRa_0.78.keras”).</li><li><strong>Optimizer_result:</strong> This is a detailed description of the backtesting results for the specific asset. It includes:</li><li>Start and end date of the backtest.</li><li>Backtesting duration.</li><li>Timeframe used (e.g., 15m).</li><li>Win Rate percentage.</li><li>Return percentage.</li><li>Expectancy percentage.</li><li>Sharpe Ratio.</li><li><strong>stop_loss_percent_long/short:</strong> These define the stop-loss percentages for long and short positions.</li><li><strong>take_profit_percent_long/short:</strong> These define the take-profit percentages for long and short positions.</li><li><strong>limit_long/short:</strong> These define the maximum price deviation allowed for entry orders (likely to prevent excessive slippage).</li><li><strong>margin_leverage:</strong> This specifies the leverage used for margin trading (set to 1 here, indicating no leverage).</li><li><strong>TRAILING_ACTIVATE_PCT &amp; TRAILING_STOP_PCT:</strong> These define parameters for trailing stop-loss, which adjusts the stop-loss dynamically.</li><li><strong>roi_at_50, 100, 150, etc.:</strong> These are likely minimum-ROI exit targets that step down the longer a position is held (e.g., roi_at_50 might be the ROI at which to exit once the position has been held for 50 time steps).</li></ul><p><strong>Interpretation:</strong></p><ul><li>This data likely comes from a backtesting tool that evaluated a specific trading strategy on various cryptocurrencies.</li><li>The results show performance metrics like win rate, 
return, and Sharpe Ratio for each asset.</li><li>Stop-loss, take-profit, and leverage parameters define the risk management aspects of the strategy.</li></ul><p><strong>Shortlisted Assets and Saving:</strong></p><ul><li>The statement mentions “shortlisted assets” but doesn’t explicitly show how they are identified. It’s possible that assets meeting certain performance criteria (based on the backtesting results) are considered shortlisted.</li><li>These shortlisted assets are saved in the file “custom_assets.txt” in the same format as the provided data snippet.</li></ul><p><strong>Disclaimer:</strong></p><ul><li>Backtesting results are not a guarantee of future performance.</li><li>Trading cryptocurrencies involves significant risks, and you should always do your own research before making any investment decisions.</li></ul><h3>Conclusion:</h3><p>This article describes a cryptocurrency trading system that utilizes a neural network model (specifically a Transformer model) and a trading bot called VishvaAlgo. Here’s a breakdown:</p><p><strong>Data and Model Training:</strong></p><ul><li>The system downloads historical data for 250+ cryptocurrency assets on Binance Futures from TradingView.</li><li>It trains a Transformer-based classification model on 3 years of 15-minute ETHUSDT data (100,000+ rows, 193+ features) to estimate whether to go neutral, long, or short, achieving a claimed return of 33,800%+ on Ethereum 
(<strong>important to note: these returns vary from system to system based on the training data and need re-verification</strong>).</li></ul><p><strong>Hyperparameter Optimization and Asset Selection:</strong></p><ul><li>The system uses Hyperopt (a hyperparameter optimization library) to identify the most suitable assets for the trained model among the downloaded data.</li><li>Each shortlisted asset has a unique set of parameters, such as stop-loss, take-profit, and leverage, tailored to the model’s predictions.</li></ul><p><strong>VishvaAlgo — The Trading Bot:</strong></p><ul><li>VishvaAlgo helps automate live trading using the trained model and the shortlisted assets with their pre-defined parameters.</li><li>The bot offers easy integration with various neural network models for classification.</li><li>A video explaining VishvaAlgo’s features and benefits is available <strong><em>— </em></strong><a href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></li></ul><p><strong>Benefits of VishvaAlgo:</strong></p><ul><li>Automates trading based on the trained model and optimized asset selection.</li><li>Offers easy integration with user-defined neural network models.</li><li>A detailed explanation and installation guide are provided with purchase through my Patreon page.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FKWAvZraD5aM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKWAvZraD5aM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FKWAvZraD5aM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/fa4c736694b0d947204a89e359dce943/href">https://medium.com/media/fa4c736694b0d947204a89e359dce943/href</a></iframe><blockquote><strong>Youtube Link Explanation of VishvaAlgo v4.x Features<em> — </em></strong><a
href="https://www.youtube.com/watch?v=KWAvZraD5aM"><strong><em>Link</em></strong></a></blockquote><blockquote>get entire code and profitable algos @ <a href="https://patreon.com/pppicasso?utm_medium=clipboard_copy&amp;utm_source=copyLink&amp;utm_campaign=creatorshare_creator&amp;utm_content=join_link">https://patreon.com/pppicasso</a></blockquote><p><strong><em>Disclaimer:</em></strong><em> Trading involves risk. Past performance is not indicative of future results. VishvaAlgo is a tool to assist traders and does not guarantee profits. Please trade responsibly and conduct thorough research before making investment decisions.</em></p><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=49d0fb7ab78b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[VishvaAlgo v3.0]]></title>
            <link>https://imbuedeskpicasso.medium.com/vishvaalgo-v3-0-f4ca0facae7e?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/f4ca0facae7e</guid>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <category><![CDATA[trading-bot]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[neural-networks]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Tue, 02 Apr 2024 05:59:15 GMT</pubDate>
            <atom:updated>2024-04-02T06:17:13.511Z</atom:updated>
            <content:encoded><![CDATA[<h3>VishvaAlgo v3.0 — Revolutionize Your Live Cryptocurrency Trading System, Enhanced with a Machine Learning (Neural Network) Model. Live Profit Screenshots Shared</h3><h3><em>Introduction:</em></h3><p>Are you tired of manually managing your trades and constantly worrying about market fluctuations? Say goodbye to the stress and welcome VishvaAlgo v3.0, the ultimate trading solution designed to revolutionize your trading experience. With advanced features and cutting-edge technology, VishvaAlgo empowers traders to maximize profitability while effectively managing risks. Let’s explore the latest enhancements and how VishvaAlgo sets itself apart from other bots in the market.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7f3vYYnu1nSNA6MhXUUb7g.png" /></figure><h3>Previously Developed Algorithms Based on the Same Concept for Neural Networks:</h3><h4>Using Neural Network Models:</h4><blockquote><em>Bitcoin/BTC 4750%+ , Etherium/ETH 11,270%+ profit in 1023 days using Neural Networks, Algorithmic Trading Vs/+ Machine Learning Models Vs/+ Deep Learning Model Part — 4 (TCN, LSTM, Transformer with Ensemble Method) — </em><a href="https://imbuedeskpicasso.medium.com/bitcoin-btc-4750-etherium-eth-11-270-profit-in-1023-days-using-neural-networks-algorithmic-d5a644cdc36f"><em>Link</em></a></blockquote><p>The article above shows that an <strong><em>ensemble method combining TCN and LSTM neural network models</em></strong> has demonstrated exceptional performance across various datasets, outperforming individual models and even surpassing buy-and-hold strategies.
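</p><p>To make the idea concrete, here is a minimal sketch of what such an ensemble reduces to: soft voting over the class probabilities of the individual networks (illustrative only; the function name and the [neutral, long, short] class ordering are assumptions, not the linked article’s actual code):</p>

```python
import numpy as np

# Illustrative soft-voting ensemble: average the class probabilities
# predicted by two classifiers (e.g. a TCN and an LSTM) and pick the
# most likely class. Assumed ordering: 0 = neutral, 1 = long, 2 = short.
def ensemble_signal(tcn_probs, lstm_probs):
    avg = (np.asarray(tcn_probs) + np.asarray(lstm_probs)) / 2.0  # soft vote
    return int(np.argmax(avg))

# Hypothetical model outputs for a single candle:
signal = ensemble_signal([0.2, 0.7, 0.1], [0.3, 0.5, 0.2])  # → 1 (long)
```

<p>Majority voting over the predicted classes, or weighting each model by its validation accuracy, are common variants of the same idea.</p><p>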
This underscores the effectiveness of ensemble learning in improving prediction accuracy and robustness.</p><h4>Using Machine Learning (Boosting) Models:</h4><blockquote><em>From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best — </em><a href="https://imbuedeskpicasso.medium.com/from-54-to-a-staggering-4648-catapulting-cryptocurrency-trading-with-catboost-classifier-75ac9f10c8fc"><em>Link</em></a></blockquote><p>The article above shows how an <strong><em>ensemble method employing classifiers such as Random Forest, Gradient Boosting, and CatBoost</em></strong> delivered exceptional returns with a stop loss of <strong><em>10% and take profit of 2.5%</em></strong>, yielding an impressive return of<strong><em> 4648% over 1022 days</em></strong> while maintaining a <strong><em>high winning rate of 81.17%.</em></strong> CatBoost’s stellar performance within the ensemble can be attributed to its effective handling of categorical features, robustness to noise, and automatic management of missing values, suggesting its adaptability across various market conditions.</p><h4>Previous Article about the Product VishvaAlgo v2.0/v2.2 and Its Features:</h4><blockquote><em>VishvaAlgo V 2.0, a Live Cryptocurrency Trading system Enhanced with Machine Learning (Neural Network) model for Live Trading — </em><a href="https://medium.com/@imbuedeskpicasso/version-2-0-3ce4a81d3e18"><em>Link</em></a></blockquote><p>In addition to the latest updates mentioned in the article above, VishvaAlgo incorporates several additional features and enhancements to elevate your trading experience:</p><p><strong>Functionality to Remove TP and SL Positions:</strong> Take profit (TP) and stop loss (SL) open positions are now automatically removed if the symbol is closed or not in an open position, ensuring optimal trade management and risk mitigation.</p><figure><img alt=""
src="https://cdn-images-1.medium.com/max/1024/1*iyxATiaQttSrMhWy-st_Hw.png" /><figcaption>Automatic setup and deletion of stop-loss and take-profit orders for open positions</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TvfYbbWkbtj2q_uD1ifSVg.png" /><figcaption>A simple function to shut down all open positions and orders. Apart from automatic cancellation, we can close all trades with one click</figcaption></figure><p><strong>Symbol Cooldown Feature:</strong> VishvaAlgo introduces a symbol cooldown feature, imposing a one-hour cooldown period after exiting an open position. This cooldown period enhances trade stability and reduces the risk of overtrading.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*C9_1GNu6tIOIU-o2lhIbmg.png" /><figcaption>Blocks a symbol for the time interval we set</figcaption></figure><p><strong>Maximum Number of Trades Limit: </strong>Users now have the capability to limit the maximum number of trades (max_trades), allowing for better control over trading activity and risk exposure. Additionally, a toggle for custom_assets enables seamless customization of trading parameters for enhanced flexibility.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*L4JG_Kr48Mm3oB4nQLHcnw.png" /><figcaption>Whenever Max Trades is reached, it shows in the logs</figcaption></figure><p><strong>Enhanced Update Process for Open Positions: </strong>The update process for open_positions has been enhanced to ensure accurate and timely updates once an asset is closed.
This improvement streamlines trade management and tracking, providing users with up-to-date insights into their trading activities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*L4JG_Kr48Mm3oB4nQLHcnw.png" /><figcaption>Based on the open_positions shown here, the bot takes the necessary next actions to open new limit orders or to close orders and positions as needed</figcaption></figure><p><strong>Trade Data Saving and Analysis: </strong>VishvaAlgo v3.0 now features the ability to save all trades for all coins into a .txt file for future analysis. The introduction of an Analyze class enables comprehensive analysis of all saved_positions.txt trades, offering users an overview of their overall profit and loss summary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ju5Oy6mqqZfN7MbD8I_jbg.png" /><figcaption>All trades the bot has taken are saved to a file and later retrieved for PnL analysis</figcaption></figure><p><strong>Individual and Combined Trade Performance Tracking: </strong>Users can now view and save the individual performance and combined total performance of trades for comprehensive analysis and tracking. This feature provides valuable insights into the effectiveness of trading strategies and enables informed decision-making.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/429/1*qt5UVZ-CAhuxXr6onaOmLw.png" /><figcaption>Both the bot’s overall trades and each individual asset’s performance can be tracked</figcaption></figure><p>With these additional features and enhancements, VishvaAlgo v3.0 offers unparalleled functionality, reliability, and performance for traders of all levels.
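</p><p>As a minimal sketch of the trade-saving and analysis idea described above (the file name saved_positions.txt comes from the article, but the record format and function names here are assumptions for illustration, not VishvaAlgo’s actual code):</p>

```python
import json
from collections import defaultdict

def save_trade(symbol, pnl, path="saved_positions.txt"):
    # Append one closed trade as a JSON line for later analysis.
    with open(path, "a") as f:
        f.write(json.dumps({"symbol": symbol, "pnl": pnl}) + "\n")

def analyze(path="saved_positions.txt"):
    # Summarize per-asset and combined PnL from the saved trades.
    per_symbol = defaultdict(float)
    with open(path) as f:
        for line in f:
            trade = json.loads(line)
            per_symbol[trade["symbol"]] += trade["pnl"]
    return {"per_symbol": dict(per_symbol), "total": sum(per_symbol.values())}
```

<p>A real implementation would also record entry/exit prices, timestamps, and fees, but even this shape is enough to produce the kind of individual and combined performance summaries described above.</p><p>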
Don’t miss out on the opportunity to revolutionize your trading journey with VishvaAlgo v3.0.</p><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.0 Live Crypto Trading Using Machine Learning Model</a></blockquote><h3><em>Latest Updates for VishvaAlgo v3.0 Using Neural Network and Machine Learning Models for Live Cryptocurrency Trading on Multiple Assets at Once, with Customization of the Trading Setup for Each Individual Asset:</em></h3><h4>VishvaAlgo v3.0 introduces several groundbreaking updates tailored to meet the diverse needs of traders:</h4><p><strong>Volume-Based Custom Assets Segregation: </strong>Custom assets are now categorized based on volume, allowing for efficient trading and optimal asset selection.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cy28T6vJvaQPAgA8hZXJ3A.png" /><figcaption>custom_assets are sorted automatically by 24-hour volume and updated every hour</figcaption></figure><p><strong>ROI-Based Profit Targeting: </strong>The bot automatically closes trades when predefined profit percentage thresholds are reached within specified time intervals, ensuring maximum profitability while minimizing risk.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hqEFuNw4uzn1oXFpIR40Yw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h7pvH2KIDFBUrmkjj1HQpA.png" /><figcaption>ROI targets are set individually for each specific asset</figcaption></figure><p><strong>Trailing Stop Loss: </strong>Enjoy peace of mind with automatic adjustment of stop-loss trigger prices and activation, enabling traders to secure profits and minimize losses.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Om7bG2WT6ai1PMNqR5Q6QQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Hm-Xa4V6jqlN49VIzhvkdQ.png"
/><figcaption>Every asset has its own leverage, TRAILING_STOP_PCT, ROIs, stop loss, and take profit defined separately</figcaption></figure><p><strong>Improved Error Handling: </strong>Enhanced error handling mechanisms ensure smoother operation and reliability, providing a seamless trading experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/878/1*3XO7HdnG51UyTn1QEchagQ.png" /><figcaption>All methods, functions, and classes are wrapped in try/except handlers, and threading is used so the bot doesn’t stop when an error occurs during processing</figcaption></figure><p><strong>Rate Limit Handling: </strong>VishvaAlgo intelligently manages rate limits while fetching trade data from APIs, optimizing data retrieval and processing efficiency.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*44umXngacPGnjjixcsBOCQ.png" /><figcaption>Backoff is used to slow down when the rate limit is reached</figcaption></figure><p><strong>Flexible Feature Activation: </strong>Users have the flexibility to toggle between activating ROI targeting and trailing stop-loss features based on their trading preferences and market conditions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pWgE4yYpomqLieswF236Vg.png" /><figcaption>roi_enabled, trailing_enabled, custom_assets_enabled, and unlimited_trades can all be enabled or disabled as the user needs</figcaption></figure><p><strong>Custom Object Support for TCN Model:</strong> Integration of TCN neural network model support expands predictive accuracy and analysis capabilities.</p><p>To build your own custom metrics, run your own model on your dataset, and find the best-suited model to use for trading, we provide over 10 models to hyperopt and train.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4ep9pqaifPwxHsern6K1XQ.png" /><figcaption>hyperoptimization of the TCN Neural Network model with a custom f1 metric
added</figcaption></figure><p><strong>Model Identification in Custom Assets:</strong> Model names are now included in the custom_assets.txt file for improved clarity and ease of reference, streamlining asset management and configuration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*amGWXSlhcogQrvUV9t0mOA.png" /><figcaption>Each asset is saved automatically with the model it was hyperoptimized by, along with its best-suited trading setup</figcaption></figure><h4><em>Profitability Showcase:</em></h4><p>I entered with 10 USDT per trade, and you can see that the bot has been taking profitable trades automatically for the last 20+ hours, with around 30+ trades taken and only 1 loss among them.</p><h4>Here are some screenshots showcasing the impressive profits generated by VishvaAlgo v3.0:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TRShuaJvFyUt-7UTJo_awQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2tomIG5xUOnJ1xQKTNtRZg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qsS4Um-CfWphSaZ0F3hrcA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h_7Nj1mL6_a16lPoI1Xs9Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CwX2qkB2O_nWxgWlsi-ZEA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lY-8QOx_L7Mgr6WufuBTLg.png" /></figure><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.0 Live Crypto Trading Using Machine Learning Model</a></blockquote><h3><em>VishvaAlgo v3.0 Advancements in Risk Management:</em></h3><p>VishvaAlgo v3.0 leads the market in risk management capabilities.
Unlike traditional bots that offer common risk management settings for all assets, VishvaAlgo empowers traders with:</p><ul><li>Individualized Stop Loss, Take Profit, Trailing Stop, ROI targeting, and Leverage settings for each asset.</li><li>Dynamic ROI-based profit targeting ensures profits are locked in at optimal levels.</li><li>Trailing Stop Loss functionality protects profits while allowing for potential upside.</li><li>Seamless integration of ML and neural network models for predictive analysis and fine-tuning trading strategies.</li></ul><h3><em>Conclusion:</em></h3><p>Experience the future of trading with VishvaAlgo v3.0. With its advanced features, unparalleled risk management capabilities, and ease of integration of ML and neural network models, VishvaAlgo is the ultimate choice for traders seeking consistent profits and peace of mind. Don’t miss out on this opportunity to revolutionize your trading journey.</p><blockquote><strong><em>Purchase Link:</em></strong><em> </em><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo V3.0 Live Crypto Trading Using Machine Learning Model</a></blockquote><blockquote>Experience the future of trading with <a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v3-0-live-crypto-trading-170240?source=storefront">VishvaAlgo v3.0 </a>and unlock new possibilities in the world of cryptocurrency trading.</blockquote><p><em>Disclaimer: Trading involves risk. Past performance is not indicative of future results. VishvaAlgo is a tool to assist traders and does not guarantee profits. 
Please trade responsibly and conduct thorough research before making investment decisions.</em></p><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f4ca0facae7e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[VishvaAlgo V 2.0,]]></title>
            <link>https://imbuedeskpicasso.medium.com/version-2-0-3ce4a81d3e18?source=rss-f3467d786018------2</link>
            <guid isPermaLink="false">https://medium.com/p/3ce4a81d3e18</guid>
            <category><![CDATA[trading-bot]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[binance]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[algorithmic-trading]]></category>
            <dc:creator><![CDATA[Puranam Pradeep Picasso - ImbueDesk Profile]]></dc:creator>
            <pubDate>Mon, 25 Mar 2024 18:12:56 GMT</pubDate>
            <atom:updated>2024-03-25T19:32:56.682Z</atom:updated>
            <content:encoded><![CDATA[<h3>VishvaAlgo V 2.0, a Live Cryptocurrency Trading system Enhanced with Machine Learning (Neural Network) model for Live Trading</h3><h3>Introduction:</h3><p>In the ever-evolving landscape of algorithmic trading, where milliseconds can make or break fortunes, the quest for the ultimate trading bot has reached new heights. Traditional bots often grapple with limitations, such as employing common leverage or fixed stop-loss and take-profit levels across multiple assets, or being confined to trading a single asset at a time, missing out on lucrative opportunities elsewhere in the market. Moreover, the complexity of algorithmic trading in the cryptocurrency sphere presents unique challenges and opportunities, characterized by unparalleled volatility and 24/7 trading cycles.</p><p>Enter a game-changer: a Python-powered trading bot designed to transcend these constraints and redefine the possibilities of automated trading. This innovative solution leverages cutting-edge technologies, including machine learning and neural networks, to empower traders with unprecedented flexibility, adaptability, and predictive accuracy. 
With a focus on customization, optimization, and real-time decision-making, this bot represents a paradigm shift in algorithmic trading, poised to revolutionize the way traders navigate the crypto markets.</p><h3>Previously Developed Algorithms Based on the Same Concept for Neural Networks:</h3><h4>Using Neural Network Models:</h4><blockquote><strong>Bitcoin/BTC 4750%+ , Etherium/ETH 11,270%+ profit in 1023 days using Neural Networks, Algorithmic Trading Vs/+ Machine Learning Models Vs/+ Deep Learning Model Part — 4 (TCN, LSTM, Transformer with Ensemble Method) </strong>— <a href="https://imbuedeskpicasso.medium.com/bitcoin-btc-4750-etherium-eth-11-270-profit-in-1023-days-using-neural-networks-algorithmic-d5a644cdc36f">Link</a></blockquote><p>The article above shows that an <strong><em>ensemble method combining TCN and LSTM neural network models</em></strong> has demonstrated exceptional performance across various datasets, outperforming individual models and even surpassing buy-and-hold strategies.
This underscores the effectiveness of ensemble learning in improving prediction accuracy and robustness.</p><h4>Using Machine Learning (Boosting) Models:</h4><blockquote><strong>From 54% to a Staggering 4648%: Catapulting Cryptocurrency Trading with CatBoost Classifier, Machine Learning Model at Its Best </strong>— <a href="https://imbuedeskpicasso.medium.com/from-54-to-a-staggering-4648-catapulting-cryptocurrency-trading-with-catboost-classifier-75ac9f10c8fc">Link</a></blockquote><p>The article above shows how an <strong><em>ensemble method employing classifiers such as Random Forest, Gradient Boosting, and CatBoost</em></strong> delivered exceptional returns with a stop loss of <strong><em>10% and take profit of 2.5%</em></strong>, yielding an impressive return of<strong><em> 4648% over 1022 days</em></strong> while maintaining a <strong><em>high winning rate of 81.17%.</em></strong> CatBoost’s stellar performance within the ensemble can be attributed to its effective handling of categorical features, robustness to noise, and automatic management of missing values, suggesting its adaptability across various market conditions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7f3vYYnu1nSNA6MhXUUb7g.png" /></figure><h3>Key Features of the New Live Trading Bot with Easy Neural Network Integration:</h3><h3>Flexibility and Customization:</h3><p>Unlike conventional bots, which impose uniform parameters across assets, our bot offers unparalleled flexibility. Users can tailor settings such as leverage, stop-loss, and take-profit levels on a per-asset basis, allowing for precise risk management and optimization of trading strategies.</p><h3>Multi-Asset Trading:</h3><p>By enabling simultaneous trading across multiple assets, our bot capitalizes on diverse market opportunities, maximizing profit potential and mitigating risk.
Gone are the days of overlooking promising assets due to the limitations of single-asset bots.</p><h3>Real-Time Decision-Making:</h3><p>In the fast-paced world of cryptocurrency trading, timing is everything. Our bot employs advanced algorithms and real-time data analysis to identify optimal entry and exit points, enabling swift and decisive action in response to market fluctuations.</p><h3>Futures Trading and Predictive Insights:</h3><p>Embracing the dynamic nature of futures trading, our bot goes beyond mere execution to offer predictive insights, guiding users on when to go long, short, or remain neutral. By anticipating market movements with precision, our bot empowers traders to stay ahead of the curve and capitalize on emerging opportunities.</p><h3>Machine Learning Integration:</h3><p>Harnessing the power of machine learning, our bot transcends traditional trading strategies by adapting to evolving market conditions and uncovering hidden patterns within complex data sets. With minimal adjustments, users can effortlessly integrate classification models, which excel in real-time prediction scenarios, outperforming regression models reliant on continuous data.</p><pre>import time<br>import ccxt<br>from keras.models import save_model, load_model<br>import numpy as np<br>import pandas as pd<br>import talib as ta<br>from sklearn.preprocessing import MinMaxScaler<br>import warnings<br>from threading import Thread, Event<br>import decimal<br>import joblib<br># from pandas.core.computation import PerformanceWarning<br><br># Suppress PerformanceWarning<br>warnings.filterwarnings(&quot;ignore&quot;)<br># Load your pre-trained model, keras trained model will only take load_model from keras.models and not from joblib<br>model = load_model(&#39;./best_model_lstm_1tp_1sl_2p5SlTp_success.h5&#39;)<br>exchange = ccxt.binanceusdm(<br>    {<br>        &#39;enableRateLimit&#39;: True,  # required by the Manual<br>        # Add any other authentication parameters if needed<br>    
}<br>    )<br># exchange<br># NOTE: I used https://testnet.binancefuture.com/en/futures/BTCUSDT for the testnet API (this has very bad liquidity issues for various assets and many other issues, but can be used for purely testing purposes)<br>#  kraken testnet creds pubkey - K9dS2SK8JURMl9F30guUhOS/ao3HM+tfNqRMgJGed+JhDfpJhvsC/y           privatekey - /J/0kQ3PPyPwsrPsKZYtLqOQNPLKZJattT6i15Bpg14/6ALokHHY/MBb1p6tYKyFgkKXIJIOMbBsRfL3aBZUvQ1<br>api_key = &#39;8f7080f8821b58a53f5c49f0041413d8cbff7447e2a2afdcce1cca9c9154ea&#39;<br>secret_key = &#39;1e58391a46a7dbb098aa512871d497c0052a9174e163e69e3a6660ba8c38f&#39;<br>exchange.apiKey = api_key<br>exchange.secret = secret_key<br>exchange.set_sandbox_mode(True)<br>###################################################################################<br># if you want to go live, uncomment the 5 lines below and comment out the 5 lines above, and change to your own api_key and secret_key (the one below is a dummy; also make sure to give &quot;futures&quot; permission while creating your API key in the exchange)<br># api_key = &#39;NAvJMmGIZY89mukwKhacKohlYmmK9BPH2LlRz8qVehdvx8lIDBvikBza&#39;<br># secret_key = &#39;7j2MoKxQEsGS01KuSQtNwWCj6dHc5UnArkGGm3kxAO63tHWZq0NyRgIaz&#39;<br># exchange.apiKey = api_key<br># exchange.secret = secret_key<br># exchange.set_sandbox_mode(False)<br>#######################################################################################<br>    # exchange.set_sandbox_mode(True)<br>exchange.has<br>exchange.fetchBalance()[&quot;info&quot;][&quot;assets&quot;]<br>exchange.options = {&#39;defaultType&#39;: &#39;future&#39;, # or &#39;margin&#39; or &#39;spot&#39;<br>                    &#39;timeDifference&#39;: 0,  # Set an appropriate initial value for time difference<br>                        &#39;adjustForTimeDifference&#39;: True,<br>                        &#39;newOrderRespType&#39;: &#39;FULL&#39;,<br>                        &#39;defaultTimeInForce&#39;: &#39;GTC&#39;}</pre><h4>Running the Trading Bot with a Few Lines of
Code:</h4><pre># NOTE: <br># `symbol` - you can mention any `symbol` here, but the bot will fetch symbols based on the volume filter we used; this only helps in initiating the bot and will not use the actual symbol mentioned here<br>#  `amount` - mention a default amount/quantity (ccxt strangely accepts quantity instead of USDT; we will try to find a work-around to enter with USDT per trade rather than quantity in future updates; for now just mention 0.001)<br># `initial_timeframe` - set 1m as the default; it helps start the whole thread faster, within 1-2 candle lengths (1-2 minutes); if you use a higher timeframe, it will take a minimum of 2 candle lengths to start the bot, so stick to 1m<br>#  `actual_timeframe` - mention the actual timeframe your prediction model got trained for; here, our LSTM neural network model got trained on the `15m` timeframe, so we mentioned that.<br>#  `leverage` - set leverage based on the risk your bot can take; our model got trained at 1x leverage, so I used a low-risk 2x leverage (or use 1x for the best possible results). In future updates, we will train bots on higher leverages and use this feature in a more profound way<br>#  `sandbox` - set to True if using a testnet (the Binance testnet doesn't work well at all, but we can use it for testing purposes). In future updates, we will see if we can use dry-run in better ways without depending on the wrong liquidity availability of testnets like the Binance testnet.
I want to understand the dry-run concept from freqtrade and implement something similar for our bot in future updates.<br>#  `stoploss_percent` &amp; `takeprofit_percent` - set your stop loss accordingly; the present model had a 10% stop loss, which is 0.1, and take profit set at 0.025, which is around 2.5%; further in the future, we will work on good risk-to-reward ratios.<br>#  `number_of_assets_to_trade` - please set a functional int number based on the model that got trained and tested on various assets through backtests; only then use a higher number. Future updates will have a defined set of assets defined by the user <br># `custom_assets_enabled` by default is set to False, but if we want to use our own custom assets, this needs to be enabled.<br># `custom_assets` - this takes dictionary values of the asset name and corresponding key, value pairs of tp, sl, leverage and other things; you can set values as needed or use the process_json(file_path) function to automate the shortlist of best-performing assets with unique sl, tp and leverages after hyper-tuning them and saving them under the custom-assets.txt file, which can be fetched and used as required.<br># `usdt_per_trade` - mention the USDT you want to trade each asset with.<br><br>trader = CCXTFuturesTrader(symbol=&quot;BTC/USDT:USDT&quot;, initial_timeframe=&#39;1m&#39;, amount=0.01, <br>                          exchange=exchange, actual_timeframe = &#39;15m&#39;, leverage=2, sandbox=True, stoploss_percent = 0.025, takeprofit_percent = 0.025, <br>                          number_of_assets_to_trade = 5, custom_assets_enabled=True, custom_assets=custom_assets, usdt_per_trade=25)<br><br>trader.start_trading(start = None, hist_bars = 200)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*zZfeVfVG_ugXfBt5" /><figcaption>Bot entering a Trade with amount, units, capital, stop loss, take profit being mentioned.</figcaption></figure><figure><img alt=""
src="https://cdn-images-1.medium.com/max/1024/1*DcqvTx_bwV7B1ZRaT2Zibg.png" /><figcaption>Entering Trades automatically on Binance Testnet live</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uNUjXjNIRtQfUFt0w6mXow.png" /><figcaption>Take_Profit and Stop_Loss auto-apply on the Binance testnet exchange through local code</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QEXZb2KGBndQHxQfVZyDRg.png" /><figcaption>Handling errors asynchronously and passing on to the next asset without the bot getting stopped</figcaption></figure><h3>Saving Data:</h3><p>Saving data for multiple cryptocurrencies from TradingView to the local system for backtesting and hyper-optimization of the machine learning models:</p><pre># !pip install --upgrade --no-cache-dir git+https://github.com/rongardF/tvdatafeed.git<br><br>import os<br>import json<br>import asyncio<br>import nest_asyncio  # needed for the nest_asyncio.apply() call below<br>from datetime import datetime, timedelta<br>import pandas as pd<br>from tvDatafeed import TvDatafeed, Interval<br># Initialize TvDatafeed object<br># username = &#39;YourTradingViewUsername&#39;<br># password = &#39;YourTradingViewPassword&#39;<br># tv = TvDatafeed(username, password)<br>tv = TvDatafeed()<br># List of symbols<br>data = [<br>    &quot;BTCUSDT.P&quot;, &quot;GMTUSDT.P&quot;, &quot;ETHUSDT.P&quot;, &quot;MTLUSDT.P&quot;, &quot;NEARUSDT.P&quot;, &quot;SOLUSDT.P&quot;, &quot;OGNUSDT.P&quot;, &quot;ZILUSDT.P&quot;, &quot;APEUSDT.P&quot;, &quot;XRPUSDT.P&quot;, &quot;ADAUSDT.P&quot;, &quot;AVAXUSDT.P&quot;, &quot;KNCUSDT.P&quot;, &quot;DOGEUSDT.P&quot;, &quot;WAVESUSDT.P&quot;, &quot;1000SHIBUSDT.P&quot;, &quot;FTMUSDT.P&quot;, &quot;BNBUSDT.P&quot;, &quot;XMRUSDT.P&quot;, &quot;DOTUSDT.P&quot;, &quot;GALAUSDT.P&quot;, &quot;MATICUSDT.P&quot;, &quot;LRCUSDT.P&quot;, &quot;RUNEUSDT.P&quot;, &quot;AUDIOUSDT.P&quot;, &quot;FILUSDT.P&quot;, &quot;ETCUSDT.P&quot;, &quot;EOSUSDT.P&quot;, &quot;ZECUSDT.P&quot;, &quot;AXSUSDT.P&quot;, &quot;LTCUSDT.P&quot;, 
&quot;SANDUSDT.P&quot;, &quot;LINKUSDT.P&quot;, &quot;SXPUSDT.P&quot;, &quot;ATOMUSDT.P&quot;, &quot;BCHUSDT.P&quot;, &quot;PEOPLEUSDT.P&quot;, &quot;MANAUSDT.P&quot;, &quot;AAVEUSDT.P&quot;, &quot;ALICEUSDT.P&quot;, &quot;BNXUSDT.P&quot;, &quot;KAVAUSDT.P&quot;, &quot;CRVUSDT.P&quot;, &quot;ONEUSDT.P&quot;, &quot;VETUSDT.P&quot;, &quot;THETAUSDT.P&quot;, &quot;DYDXUSDT.P&quot;, &quot;ICPUSDT.P&quot;, &quot;ALGOUSDT.P&quot;, &quot;SUSHIUSDT.P&quot;, &quot;RENUSDT.P&quot;, &quot;COMPUSDT.P&quot;, &quot;XLMUSDT.P&quot;, &quot;CHZUSDT.P&quot;, &quot;TLMUSDT.P&quot;, &quot;TRXUSDT.P&quot;, &quot;XTZUSDT.P&quot;, &quot;FTTUSDT.P&quot;, &quot;IMXUSDT.P&quot;, &quot;CELRUSDT.P&quot;, &quot;WOOUSDT.P&quot;, &quot;HNTUSDT.P&quot;, &quot;EGLDUSDT.P&quot;, &quot;ENJUSDT.P&quot;, &quot;CELOUSDT.P&quot;, &quot;BATUSDT.P&quot;, &quot;KSMUSDT.P&quot;, &quot;UNIUSDT.P&quot;, &quot;ROSEUSDT.P&quot;, &quot;BAKEUSDT.P&quot;, &quot;RSRUSDT.P&quot;, &quot;IOSTUSDT.P&quot;, &quot;GRTUSDT.P&quot;, &quot;DASHUSDT.P&quot;, &quot;ALPHAUSDT.P&quot;, &quot;FLOWUSDT.P&quot;, &quot;OCEANUSDT.P&quot;, &quot;DENTUSDT.P&quot;, &quot;CHRUSDT.P&quot;, &quot;OMGUSDT.P&quot;, &quot;HOTUSDT.P&quot;, &quot;LINAUSDT.P&quot;, &quot;SRMUSDT.P&quot;, &quot;COTIUSDT.P&quot;, &quot;SKLUSDT.P&quot;, &quot;NEOUSDT.P&quot;, &quot;SNXUSDT.P&quot;, &quot;ICXUSDT.P&quot;, &quot;ARUSDT.P&quot;, &quot;1INCHUSDT.P&quot;, &quot;API3USDT.P&quot;, &quot;ANKRUSDT.P&quot;, &quot;DUSKUSDT.P&quot;, &quot;REEFUSDT.P&quot;, &quot;BALUSDT.P&quot;, &quot;BANDUSDT.P&quot;, &quot;ZRXUSDT.P&quot;, &quot;C98USDT.P&quot;, &quot;QTUMUSDT.P&quot;, &quot;STORJUSDT.P&quot;, &quot;IOTAUSDT.P&quot;, &quot;ONTUSDT.P&quot;, &quot;MASKUSDT.P&quot;, &quot;GTCUSDT.P&quot;, &quot;HBARUSDT.P&quot;, &quot;MKRUSDT.P&quot;, &quot;TOMOUSDT.P&quot;, &quot;ENSUSDT.P&quot;, &quot;ZENUSDT.P&quot;, &quot;SFPUSDT.P&quot;, &quot;CVCUSDT.P&quot;, &quot;IOTXUSDT.P&quot;, &quot;CTKUSDT.P&quot;, &quot;FLMUSDT.P&quot;, &quot;NKNUSDT.P&quot;, 
&quot;YFIUSDT.P&quot;, &quot;RLCUSDT.P&quot;, &quot;BTSUSDT.P&quot;, &quot;KLAYUSDT.P&quot;, &quot;BELUSDT.P&quot;, &quot;XEMUSDT.P&quot;, &quot;ANTUSDT.P&quot;, &quot;SCUSDT.P&quot;, &quot;LITUSDT.P&quot;, &quot;CTSIUSDT.P&quot;, &quot;STMXUSDT.P&quot;, &quot;UNFIUSDT.P&quot;, &quot;RVNUSDT.P&quot;, &quot;1000XECUSDT.P&quot;, &quot;RAYUSDT.P&quot;, &quot;BLZUSDT.P&quot;, &quot;ATAUSDT.P&quot;, &quot;ARPAUSDT.P&quot;, &quot;DGBUSDT.P&quot;, &quot;LPTUSDT.P&quot;, &quot;TRBUSDT.P&quot;, &quot;OPUSDT.P&quot;, &quot;GALUSDT.P&quot;<br>]<br>nest_asyncio.apply()<br># Define data download function<br>async def download_data(symbol):<br>    try:<br>        data = tv.get_hist(symbol=symbol, exchange=&#39;BINANCE&#39;, interval=Interval.in_15_minute, n_bars=20000, extended_session=True)<br>        if not data.empty:<br>            # Convert Date objects to strings<br>            # data[&#39;Date&#39;] = data.index.date.astype(str)<br>            # data[&#39;Time&#39;] = data.index.time.astype(str)<br>            data[&#39;date&#39;] = data.index.astype(str)  # Add a new column for timestamps<br>            folder_name = &quot;tradingview_crypto_assets_15m&quot;<br>            os.makedirs(folder_name, exist_ok=True)<br>            # Replace &quot;USDT.P&quot; with &quot;/USDT:USDT&quot; in the file name<br>            symbol_file_name = symbol.replace(&quot;USDT.P&quot;, &quot;&quot;) + &quot;.json&quot;<br>            file_name = os.path.join(folder_name, symbol_file_name)<br>            # Convert DataFrame to dictionary<br>            data_dict = data.to_dict(orient=&#39;records&#39;)<br>            with open(file_name, &quot;w&quot;) as file:<br>                # Serialize dictionary to JSON<br>                json.dump(data_dict, file)<br>            print(f&quot;Data for {symbol} downloaded and saved successfully.&quot;)<br>        else:<br>            print(f&quot;No data available for {symbol}.&quot;)<br>    except Exception as e:<br>        print(f&quot;Error 
occurred while downloading data for {symbol}: {e}&quot;)<br># Define main function to run async download tasks<br>async def main():<br>    tasks = [download_data(symbol) for symbol in data]<br>    await asyncio.gather(*tasks)<br># Run the main function<br>asyncio.run(main())</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*0RW_sZqXlolyRBEQ" /><figcaption>Multiple cryptocurrency assets data being downloaded to local system with one code execution</figcaption></figure><h4>Backtesting Strategy with below function which has Specific Stop Loss, Take profit, leverage setup done:</h4><pre># Define MyCandlesStrat_3 class<br>    class MyCandlesStrat_3(Strategy):  <br>        sl_pct_l = best_params[&#39;BEST_STOP_LOSS_sl_pct_long&#39;] <br>        tp_pct_l = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_long&#39;] <br>        limit_l = best_params[&#39;BEST_LIMIT_ORDER_limit_long&#39;] <br>        sl_pct_s = best_params[&#39;BEST_STOP_LOSS_sl_pct_short&#39;] <br>        tp_pct_s = best_params[&#39;BEST_TAKE_PROFIT_tp_pct_short&#39;] <br>        limit_s = best_params[&#39;BEST_LIMIT_ORDER_limit_short&#39;] <br>        margin_leverage = best_params[&#39;BEST_LEVERAGE_margin_leverage&#39;]<br>        <br>        # sl_pct_l = 0.025<br>        # tp_pct_l = 0.025<br>        # limit_l = 0.024<br>        # sl_pct_s = 0.025<br>        # tp_pct_s = 0.025<br>        # limit_s = 0.024<br>        # margin_leverage = 2<br>        def init(self):<br>            super().init()<br>            self.signal1 = self.I(SIGNAL_3, self.data)<br>        def next(self):<br>            super().next() <br>            # if self.position:<br>            if (self.signal1 == 1):<br>                # sl_pct = 0.005  # 2% stop-loss<br>                # tp_pct = 0.005  # 5% take-profit<br>                sl_price = self.data.Close[-1] * (1 - (self.sl_pct_l * 0.001))<br>                tp_price = self.data.Close[-1] * (1 + (self.tp_pct_l * 0.001))<br>                limit_price = tp_price * 
0.996<br>                self.buy(sl=sl_price, limit=limit_price, tp=tp_price)<br>            elif (self.signal1 == 2):<br>                # sl_pct = 0.005  # 2% stop-loss<br>                # tp_pct = 0.005  # 5% take-profit<br>                sl_price = self.data.Close[-1] * (1 + (self.sl_pct_s * 0.001))<br>                tp_price = self.data.Close[-1] * (1 - (self.tp_pct_s * 0.001))<br>                limit_price = sl_price * 0.996<br>                self.sell(sl=sl_price, limit=limit_price, tp=tp_price)<br>    # Run backtest<br>    bt_3 = Backtest(df_ens, MyCandlesStrat_3, cash=100000, commission=.001, margin=(1/MyCandlesStrat_3.margin_leverage), exclusive_orders=False)<br>    stat_3 = bt_3.run()<br>    print(&quot;backtest done - &quot;, stat_3)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*uLkjPLct4GMxyHHz" /><figcaption>Backtesting result of one crypto asset after hyper-optimization is done</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*RjnnZWj5A15OwBoZ" /><figcaption>Backtesting result of a second crypto asset after hyper-optimization is done</figcaption></figure><h3>Optimizing The Strategy to Find Best Parameters for Trading:</h3><pre># Optimization<br>        def optimize_strategy():<br>            # Optimization Params<br>            optimizer = &#39;Sharpe Ratio&#39;<br>            stats = bt_3.optimize(<br>                sl_pct_l = range(6,100, 2), # (5,10,15,20,25,30,40,50,75,100)<br>                tp_pct_l =  range(6,100, 2), # (0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.075, 0.1)<br>                # limit_l =  (4,9,14,19,24,29,39,49,74,90),#  (0.004, 0.009, 0.014, 0.019, 0.024, 0.029, 0.039, 0.049, 0.074, 0.09)<br>                sl_pct_s = range(6,100, 2),<br>                tp_pct_s =  range(6,100, 2),<br>                # limit_s =  (4,9,14,19,24,29,39,49,74,90),<br>
           margin_leverage = range(1, 6),<br>                constraint=lambda p: ((p.sl_pct_l &gt; (p.tp_pct_l + 4)) and (p.sl_pct_s &gt; (p.tp_pct_s + 4))),<br>                maximize = optimizer,<br>                return_optimization=True,<br>                method = &#39;skopt&#39;,<br>                max_tries = 100  # number of evaluations (1 to 200) for the &#39;skopt&#39; method; without &#39;skopt&#39;, a fraction such as 0.2 means 20% of the grid and 1.0 means 100%<br>            )<br>            # Extract the optimization results<br>            best_params = {<br>                &#39;Optimizer&#39;: optimizer,<br>                &#39;OptimizerResult_Cross&#39;: stats[0][optimizer],<br>                &#39;BEST_STOP_LOSS_sl_pct_long&#39;: stats[1].x[0],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_long&#39;: stats[1].x[1],<br>                &#39;BEST_LIMIT_ORDER_limit_long&#39;: stats[1].x[1] * 0.997,<br>                &#39;BEST_STOP_LOSS_sl_pct_short&#39;: stats[1].x[2],<br>                &#39;BEST_TAKE_PROFIT_tp_pct_short&#39;: stats[1].x[3],<br>                &#39;BEST_LIMIT_ORDER_limit_short&#39;: stats[1].x[3] * 0.997,<br>                &#39;BEST_LEVERAGE_margin_leverage&#39;: stats[1].x[4]<br>            }<br>            return best_params</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*jcJu2AfesOog8hwV" /><figcaption>Custom assets list auto-saved after running hyper-optimization on machine-learning-led backtesting.</figcaption></figure><h3>My Sales Pitch for the Live Crypto Trading Bot with Easy Neural Networks Integration: (A Machine Learning Live Trading Bot)</h3><h4>🚀🔥 <strong>Unlock Your Crypto Trading Potential with Our Revolutionary Bot!</strong> 🔥🚀</h4><p>Calling all crypto enthusiasts, financial wizards, and machine learning mavens!
📈💰</p><p>Are you ready to elevate your trading game to new heights? Introducing our cutting-edge trading bot, meticulously crafted to cater to the needs of the modern trader. Say goodbye to manual guesswork and hello to the future of automated trading powered by Machine Learning and Neural Networks!</p><h4>🔥 <strong>Latest Updates — March 25th, 2024:</strong> 🔥</h4><p>✨ <strong>Custom Asset Integration:</strong> Now, tailor your trading experience with custom assets, complete with personalized settings for take profit, stop loss, and leverage. Trade with confidence, your way!</p><p>✨ <strong>Hyper-Tuned Asset Selection:</strong> Our bot does the heavy lifting for you, automatically identifying the best crypto assets for your ML/DL models through rigorous hyper-tuning and backtesting. Say hello to smarter trading decisions!</p><p>✨ <strong>Multi-Asset Trading:</strong> Trade multiple assets simultaneously with ease. Take full control of your portfolio and diversify like never before!</p><p>✨ <strong>USDT Value Trading:</strong> Introducing trading with specified USDT value for added flexibility and precision in your transactions.</p><p>✨ <strong>Enhanced Performance:</strong> With improved error handling, threading, multiprocessing, GPU activation, and CPU core utilization, experience lightning-fast performance and seamless execution.</p><p>✨ <strong>Neural Network Integration:</strong> Fine-tune your trading strategies with our Neural Network model, complete with preprocessing steps for optimal results.</p><p>✨ <strong>Data Access Simplified:</strong> Say goodbye to hassles! Access data from TradingView effortlessly, no login credentials required.</p><p>✨ <strong>Easy Setup:</strong> Follow our straightforward package installation instructions for a hassle-free setup process. Get started in minutes!</p><p>💡 <strong>Description:</strong> Our trading bot revolutionizes the way you trade cryptocurrencies. 
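</p><p>To make the custom-asset feature described above concrete, here is a minimal sketch of a per-asset configuration and the custom-assets.txt round trip. The key names (tp, sl, leverage), the example values, and this process_json body are assumptions based on the parameter descriptions earlier in this post, not the bot's exact schema:</p>

```python
import json

# Hypothetical per-asset settings: asset name -> take profit, stop loss, leverage.
# Key names and values are illustrative; the bot's actual schema may differ.
custom_assets = {
    "BTC/USDT:USDT": {"tp": 0.025, "sl": 0.025, "leverage": 2},
    "ETH/USDT:USDT": {"tp": 0.030, "sl": 0.020, "leverage": 3},
}

# Persist the shortlist the way the hyper-tuning step saves it (assumed JSON on disk).
with open("custom-assets.txt", "w") as f:
    json.dump(custom_assets, f, indent=2)

def process_json(file_path):
    """Sketch of fetching the auto-saved shortlist back into a dict."""
    with open(file_path) as f:
        return json.load(f)

loaded = process_json("custom-assets.txt")
print(loaded["BTC/USDT:USDT"]["leverage"])  # -> 2
```

<p>The loaded dictionary can then be passed as the custom_assets argument (with custom_assets_enabled=True) in the trader setup shown earlier.</p><p>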
Train, test, backtest, and deploy ML/DL models effortlessly across various assets and time frames. Customize to your heart’s content and unleash its power for live or paper trading. Experience seamless execution, comprehensive functionality, and detailed trading reports across multiple exchanges.</p><h4>🌟 <strong>Take the First Step Towards Trading Success:</strong> 🌟</h4><p>Unlock the full potential of your trading journey with our innovative bot. Hyper-optimize your models, run extensive tests, and dive into the world of automated trading with confidence. Need assistance or have suggestions? We’re here for you every step of the way.</p><p>🚀 Don’t miss out on this opportunity to supercharge your trading strategies and embark on a journey to financial freedom! 🚀</p><blockquote>👉 Click here to seize the opportunity:</blockquote><blockquote><a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v2-0-live-crypto-trading-156496">VishvaAlgo_V2.0 — Product Link</a> 👈</blockquote><p>Here’s to your success in the thrilling world of crypto trading!</p><h3>Conclusion:</h3><p>In conclusion, the emergence of our Python-powered trading bot represents a quantum leap forward in the realm of algorithmic trading. By addressing the limitations inherent in existing bots and leveraging the unparalleled capabilities of machine learning and neural networks, our bot offers a transformative solution for traders seeking to navigate the complexities of the cryptocurrency markets with confidence and precision. With its unparalleled flexibility, multi-asset trading capabilities, real-time decision-making prowess, and integration of cutting-edge technologies, our bot stands poised to revolutionize the way traders approach automated trading.
Embrace the future of trading with our bot and unlock a world of possibilities in the ever-expanding realm of cryptocurrency trading.</p><h4><strong>Bot Link</strong> — <a href="https://www.patreon.com/pppicasso/shop/vishvaalgo-v2-0-live-crypto-trading-156496">https://www.patreon.com/pppicasso/shop/vishvaalgo-v2-0-live-crypto-trading-156496</a></h4><p>Warm Regards,</p><p><strong>Puranam Pradeep Picasso</strong></p><p><strong>Linkedin</strong> — <a href="https://www.linkedin.com/in/puranampradeeppicasso/">https://www.linkedin.com/in/puranampradeeppicasso/</a></p><p><strong>Patreon </strong>— <a href="https://patreon.com/pppicasso">https://patreon.com/pppicasso</a></p><p><strong>Facebook </strong>— <a href="https://www.facebook.com/puranam.p.picasso/">https://www.facebook.com/puranam.p.picasso/</a></p><p><strong>Twitter</strong> — <a href="https://twitter.com/picasso_999">https://twitter.com/picasso_999</a></p>]]></content:encoded>
        </item>
    </channel>
</rss>