Inspiration

Our project aims to democratize algorithmic trading and the data behind it, redirecting a share of a $150 billion monthly trade volume toward transparency and purpose. We wanted to open up the centralized, ever-exclusive profession of algorithmic trading, giving the average retail trader the same tools and data as billion-dollar firms, and empowering curiosity and innovation through open-source data and tools.

What it does

Our project is a brokerage platform that hosts compute, data APIs, processing tools, algorithms, and live data streams. It combines the usability of Robinhood with the decentralized, community-driven spirit of technical platforms like Kaggle. We take strategies and ideas that normally require huge capital and expertise and hand them to the everyday retail investor. When customers upload code, our platform lets them test their ideas with paper trades; once they are confident, we host and execute the strategies on our infrastructure for a small fee.

There are two distinct properties of our project that stand out as unique:

  1. Algorithm Marketplace: Anyone can create an algorithm and earn a commission by allowing others to run it. Investors can put money into unorthodox, unique public algorithms, removing the financial and technological barriers that young analysts and programmers without capital would otherwise face. In doing so, the project opens the investment community up to a new, diverse, and complex set of financial products.

  2. Collaborative Data Streams: All users are encouraged to connect their individual data streams, growing a community that helps each other innovate and develop refined, reliable sources of data. This serves as both a gateway to accessibility and a lens of transparency: users can track and encourage responsible investments, monitoring and investing in entities that emphasize causes such as sustainability or other social movements.

How we built it

We built our project in two stages: first the framework, then the use cases. Drawing on our combined experience working with this type of data at Bloomberg and Amazon, we created a framework that is both highly optimized and easy to use. Below, we highlight two use case examples that were developed on our platform.

Use Case 1: Technical Analysis Using Independent Component Analysis (ICA)

  1. Traditional Stock Analysis Using ICA:

    • We utilize Independent Component Analysis (ICA) to decompose an observed data matrix 𝑋 as 𝑋 = 𝐴𝑆. The goal is to estimate 𝐴 (the mixing matrix) and 𝑆 (the source signals), under the assumption that the components of 𝑆 are statistically independent.
    • ICA maximizes non-Gaussianity (e.g., using kurtosis or negentropy) to ensure independence, allowing us to identify independent forces or components in mixed signals that contribute to changes in the overall system.
  2. Cosine Similarity Between Stocks:

    • By comparing the independent-component loadings that drive each stock's price, we compute cosine similarity between stocks. This yields a value in the range [-1, 1] representing how strongly any two stocks share the same independent components.
  3. Dynamic Graph Representation:

    • We build an updating graph based on the relationships derived from the cosine similarity, providing real-time insight into how stocks are interrelated through their independent components.
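The pipeline above can be sketched end to end. This is a minimal illustration on synthetic data, not our production code: it uses scikit-learn's FastICA to recover sources and a mixing matrix, then computes pairwise cosine similarity between each stock's component loadings and thresholds it into a graph edge list (the dimensions, seed, and threshold are made up for the example):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic example: 3 latent "market forces" mixed into 5 stock return series.
n_samples, n_sources, n_stocks = 500, 3, 5
S_true = rng.laplace(size=(n_samples, n_sources))   # non-Gaussian sources
A_true = rng.normal(size=(n_sources, n_stocks))
X = S_true @ A_true                                 # observed returns matrix

# Estimate S (source signals) and A (mixing matrix) with FastICA.
ica = FastICA(n_components=n_sources, random_state=0)
S_est = ica.fit_transform(X)   # shape (n_samples, n_sources)
A_est = ica.mixing_            # shape (n_stocks, n_sources)

# Each row of A_est is one stock's loading on the independent components;
# cosine similarity between rows measures how much two stocks share them.
unit = A_est / np.linalg.norm(A_est, axis=1, keepdims=True)
sim = unit @ unit.T            # (n_stocks, n_stocks), values in [-1, 1]

# Edge list for the similarity graph: connect stocks above a threshold.
threshold = 0.8
edges = [(i, j) for i in range(n_stocks) for j in range(i + 1, n_stocks)
         if sim[i, j] > threshold]
```

In a live setting the similarity matrix would be recomputed as new data arrives, and the edge list fed into whatever graph store or visualization the platform uses.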

Use Case 2: Prediction Algorithm

  • Our second use case involves a prediction algorithm that tracks stock movement and applies trend-based estimation across various stocks.
  • This demonstrates a low-latency, real-time application, emphasizing SingleStore's capability for real-time database operations and showing how the platform can support high-speed financial data processing.

Challenges we ran into

We encountered several challenges, including latency issues, high costs, and difficulties with integrating real-time data processing due to rate limits and expenses. Another hurdle was selecting the right source for real-time stock data, as both maintaining the database and processing the data were costly, with the data stream alone costing nearly $60.

Accomplishments that we're proud of

We collectively managed to create a framework that is technically solid today and built to scale as the project grows.

What we learned

We gained experience with data normalization techniques for stock data and learned how to synchronize time-series datasets with missing values. We also had to think deeply about the scalability of our platform and the frictionless experience we wanted to present.
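As a rough illustration of the kind of alignment involved (the feed names, timestamps, and prices here are made up), two feeds sampled at irregular times can be put on a common grid with pandas, forward-filling gaps so every timestamp carries each feed's most recent known price:

```python
import pandas as pd

# Two hypothetical price feeds sampled at different, irregular times.
a = pd.Series([100.0, 101.0, 102.5],
              index=pd.to_datetime(["2024-01-01 09:30",
                                    "2024-01-01 09:32",
                                    "2024-01-01 09:35"]), name="a")
b = pd.Series([50.0, 50.4],
              index=pd.to_datetime(["2024-01-01 09:31",
                                    "2024-01-01 09:34"]), name="b")

# Align both feeds on a common 1-minute grid; forward-fill so each
# timestamp carries the most recent known price from each feed.
synced = (pd.concat([a, b], axis=1)
            .resample("1min").last()
            .ffill())
```

Leading values stay NaN until a feed's first observation arrives, which is exactly the missing-information case we had to handle.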

What's next for The OpenTradeLab

We have several initiatives we're excited to work on:

  1. Growing Communities Around Social Investments:

    • We aim to explore sustainable ways to build and foster communities focused on investment in social causes.
  2. Direct Exchange Connectivity:

    • We're looking into the possibility of connecting directly to an exchange to enable real-time trade routing.
  3. Optimized Code Conversion:

    • We plan to develop an API and library that converts Python code into optimized C++ code for enhanced performance.
  4. Investment Safeguards:

    • Implementing safeguards to promote responsible and secure investment practices is another key area of focus.