<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>IPFS Cluster News</title>
    <link>https://ipfscluster.io/news/</link>
    <image>
      <url>https://ipfscluster.io/android-chrome-192x192.png</url>
      <title>IPFS Cluster News</title>
      <link>https://ipfscluster.io/news/</link>
    </image>
    <description>Recent news on IPFS Cluster</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 22 Mar 2022 00:00:00 +0000</lastBuildDate>
    
        <atom:link href="https://ipfscluster.io/news/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>State of the clusters: March 2022</title>
      <link>https://ipfscluster.io/news/state-of-the-clusters-march-2022/</link>
      <pubDate>Tue, 22 Mar 2022 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/state-of-the-clusters-march-2022/</guid>
      <description>State of the clusters: March 2022 Two months have passed since our last update on the &amp;ldquo;state of the clusters&amp;rdquo;. In our previous post I mentioned we were tracking 25 million pins on a 9-peer cluster.
Today that cluster (which stores content for NFT.storage) has grown to 18 peers and 50 million pins. Our average ingest rate holds steady at around 4 new pins per second.
The new peers were added and were able to sync the cluster pinset in about 24 hours.</description>
      <content:encoded><![CDATA[

<h2 id="state-of-the-clusters-march-2022">State of the clusters: March 2022</h2>

<p>Two months have passed since our <a href="../state-of-the-clusters-jan-2022/">last update</a>
on the &ldquo;state of the clusters&rdquo;. In our previous post I mentioned we were
tracking 25 million pins on a 9-peer cluster.</p>

<p>Today that cluster (which stores content for
<a href="https://nft.storage">NFT.storage</a>) has grown to <strong>18 peers and 50 million
pins</strong>. Our average ingest rate holds steady at around 4 new pins per second.</p>

<p>The new peers were added and were able to sync the cluster pinset in about 24
hours. This is a cluster with a CRDT-DAG depth of 500k which, given the multiple
branches, likely involved syncing millions of CRDT-DAG blocks. Because the new
peers are empty and have more space than the older ones, they started storing
content and taking over the load, relieving the others as intended (the older ones
have up to 70TB of data pinned).</p>

<p>In the last version (v0.14.5), which we rolled out everywhere, we included
some changes to improve performance and CRDT-DAG syncing. We have also started
rebuilding older nodes with an <strong>LVM-striped, XFS and flatfs/next-to-last-3
datastore layout configuration for IPFS</strong>. In our experience, XFS performs
better than Ext4 for folders with large numbers of files, which is essentially
what flatfs produces. Next-to-last-3 is a sharding strategy that shards blocks
into folders named after 3 characters of the block filename (the default is
2). By having more shards, there are fewer items in every folder, which is
better for very large nodes.</p>
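<p>As a rough illustration of the sharding (the block filename below is made up;
flatfs stores one file per block and the &ldquo;next-to-last/N&rdquo; strategy names the
shard folder after the N characters preceding the last character of the
filename):</p>

<pre><code class="language-sh"># Hypothetical flatfs block filename:
base="CIQOMQVHRQD73PW6VTOIYAQ32PUXJVM3RWLDNJ7NMQD5RTYSIKTAUSJQ"

# next-to-last/2 (the default): shard folder is the 2 chars before the last one
echo "${base: -3:2}"   # SJ

# next-to-last/3: shard folder is the 3 chars before the last one
echo "${base: -4:3}"   # USJ
</code></pre>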

<p>The main issue now preventing unbounded scalability is that the huge pinset
causes memory spikes whenever a cluster peer needs to check that the pins
that are supposed to be on IPFS are actually there. This is because every item in
the pinset is loaded into memory in order to iterate over it. At this point,
the memory spikes are very noticeable and steal memory which IPFS would gladly
use.</p>

<p>The next release of IPFS Cluster will address this and other issues through a
major shift in how things work internally, which will not only fix the memory
spikes, but also unlock lots of performance gains when adding content to
cluster peers. With these changes, IPFS Cluster will graduate to
version 1.0.0, having proven its reliability and scalability properties while
serving production infrastructure.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>State of the clusters: January 2022</title>
      <link>https://ipfscluster.io/news/state-of-the-clusters-jan-2022/</link>
      <pubDate>Mon, 17 Jan 2022 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/state-of-the-clusters-jan-2022/</guid>
      <description>State of the clusters: January 2022 Today, we would like to provide a few details and figures on where we are with regard to cluster scalability, particularly in ensuring IPFS storage allocation and replication behind the NFT.storage platform.
We have started 2022 with a new release (v0.14.4). A few months ago, we were happy to report that we were tracking around 2 million pins.
Today, cluster is tracking over 25 million pins for NFT.</description>
      <content:encoded><![CDATA[

<h2 id="state-of-the-clusters-january-2022">State of the clusters: January 2022</h2>

<p>Today, we would like to provide a few details and figures on where we are with
regard to cluster scalability, particularly in ensuring IPFS storage
allocation and replication behind the <a href="https://nft.storage">NFT.storage</a>
platform.</p>

<p>We have started 2022 with a
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">new release (v0.14.4)</a>. <a href="../0.13.3_nft_storage/">A few months ago</a>,
we were happy to report that we were tracking around 2 million pins.</p>

<p>Today, cluster is tracking over <strong>25 million pins</strong> for NFT.storage in a
single cluster, made up of <strong>9 peers</strong> with around 85TB of storage each, running
go-ipfs v0.12.0-rc1. On average, we are ingesting more than 4 items per second
(normally add-requests that put the content directly on the cluster). We
know we can take <em>many hundreds of pins per second</em> when needed.</p>

<p>These numbers are not overly impressive when compared with, for example, a
PostgreSQL instance for pinset tracking, but we understand cluster as a
distributed application with seamless pinset syncing which also supports
things like follower clusters and scalability to hundreds of peers based on
its pubsub+CRDT pinset distribution mechanisms.</p>

<p>In terms of configuration, we have set the cluster peer to let IPFS pin up to
<strong>8 items in parallel</strong>. This is the value we found to perform well
when going through pinning queues of several million items. Bitswap
performance, disk usage and network bandwidth all affect what the right value
is. The cluster peers are configured using the <em>crdt</em> consensus mode, with
<strong>replication factors set to 3</strong>. Each node is tagged with a <strong>datacenter</strong>
tag, and the allocator is set to allocate per datacenter and free space. Thus,
we get global distribution of every pin, allocated to the peers with the most
free space in each DC. We make use of the crdt-batching feature, creating
commits every 300 items or 10 seconds (although we tune these values as
needed, sometimes increasing the batch size or delay). For reference, one
batch (crdt-delta) can fit almost 4000 pins with 3 allocations (the actual
number depends on the pin options and allocations).</p>
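<p>As a sketch, a setup along these lines maps to the following fragments of the
peer&rsquo;s <code>service.json</code> (key names as in recent releases and values
illustrative, so check the configuration reference for your version):</p>

<pre><code class="language-json">{
  "cluster": {
    "replication_factor_min": 3,
    "replication_factor_max": 3
  },
  "consensus": {
    "crdt": {
      "batching": {
        "max_batch_size": 300,
        "max_batch_age": "10s"
      }
    }
  },
  "pin_tracker": {
    "stateless": {
      "concurrent_pins": 8
    }
  },
  "allocator": {
    "balanced": {
      "allocate_by": ["tag:datacenter", "freespace"]
    }
  }
}
</code></pre>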

<p>The 20x pinset growth in the last few months has necessarily been accompanied
by several releases to get IPFS Cluster up to the task of handling
multi-million setups:</p>

<ul>
<li>The cluster-peer datastore can be set up with either LevelDB or Badger; the
latter is GC&rsquo;ed regularly so that it does not grow to take too much space per
pin.</li>
<li>We heavily sped up operations that read the full pinset (<code>pin ls</code> or
<code>status</code>). For example, it is now very efficient to check all the pins in
error or queued states because filtering has been improved. Listing all pins
in the state is now an order of magnitude faster.</li>
<li>State export and import functions have also been improved to allow for
cluster pinsets to be moved around (to new clusters), which facilitates
maintenance, for example by setting new allocations for pins.</li>
</ul>

<p>The next steps are to keep iterating towards supporting much larger
pinsets. One of the improvements in the pipeline will be streaming-RPC support
(<a href="../cluster_rpc_components/">cluster components communicate via RPC</a>). This
will allow us to speed up many operations, such as listing or adding to the
cluster.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>NFT.storage - powered by IPFS Cluster v0.13.3</title>
      <link>https://ipfscluster.io/news/0.13.3_nft_storage/</link>
      <pubDate>Fri, 14 May 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.13.3_nft_storage/</guid>
      <description>20210514 | NFT.storage - powered by IPFS Cluster v0.13.3 Filecoin recently announced the launch of NFT.storage, a pinning service to provide perpetual IPFS storage specifically catered to NFT creators and collectors.
The service is backed by storage provided by Pinata and Protocol Labs, with the service on the Protocol Labs side relying on IPFS Cluster for pin tracking and replication.
The service has been set up as a collaborative cluster with 3 main storage peers run by Protocol Labs.</description>
      <content:encoded><![CDATA[

<h2 id="20210514-nft-storage-powered-by-ipfs-cluster-v0-13-3">20210514 | NFT.storage - powered by IPFS Cluster v0.13.3</h2>

<p>Filecoin recently announced the launch of <a href="https://nft.storage">NFT.storage</a>,
a pinning service to provide <a href="https://filecoin.io/blog/posts/introducing-nft.storage-free-decentralized-storage-for-nfts/">perpetual IPFS storage specifically catered to
NFT creators and
collectors</a>.</p>

<p>The service is backed by storage provided by Pinata and Protocol Labs, with
the service on the Protocol Labs side relying on IPFS Cluster for pin tracking and
replication.</p>

<p>The service has been set up as a
<a href="https://collab.ipfscluster.io">collaborative cluster</a> with 3 main storage
peers run by Protocol Labs. The Cluster currently tracks and pins 1,900,000+ items,
including many existing NFTs from around the web, which are preserved for posterity.</p>

<p>To better support the requirements of the project, a couple of upgrades have
been added to IPFS Cluster:</p>

<ul>
<li><p>First, we have enabled batch-pin ingest in CRDT mode. This allowed us to
easily ingest over 400,000 pins to the cluster in less than 1 hour, at a
very low cost to the system. From that point, the cluster peers make sure
that IPFS pins the items in an orderly fashion, restarting stuck pins as needed.</p></li>

<li><p>Second, we have added the possibility of adding arbitrary DAGs to the
cluster directly, by enabling CAR-file imports on the <code>/add</code> endpoint. This
powers the storage of CBOR-encoded DAGs that include metadata and links to
the actual NFT material.</p></li>
</ul>
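<p>Assuming a cluster peer with the REST API on its default listen address, a
CAR file can be imported with something like (filename here is
illustrative):</p>

<pre><code class="language-sh"># Import an arbitrary DAG packed as a CAR file via the REST API
# (127.0.0.1:9094 is the default REST API address):
curl -X POST -F "file=@my-dag.car" "http://127.0.0.1:9094/add?format=car"
</code></pre>

<p>The same can be done from the command line with
<code>ipfs-cluster-ctl add --format car my-dag.car</code>.</p>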

<p>These features have been included in
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">IPFS Cluster 0.13.3</a>,
which we just released.</p>

<p>Happy pinning (and now, at very high rates)!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.13.1 and current state of the project</title>
      <link>https://ipfscluster.io/news/0.13.1_release/</link>
      <pubDate>Thu, 14 Jan 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.13.1_release/</guid>
      <description>20210114 | Release 0.13.1 and current state of the project We just released IPFS Cluster 0.13.1, with some bugfixes, dependency upgrades and a couple of improvements.
While development efforts have been moved to other parts of the ecosystem in the last few months, the IPFS Cluster project continues to be maintained, although without active development of large features. What users can expect is:
 Support over the common channels, responses to issues etc.</description>
      <content:encoded><![CDATA[

<h2 id="20210114-release-0-13-1-and-current-state-of-the-project">20210114 | Release 0.13.1 and current state of the project</h2>

<p>We just released
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">IPFS Cluster 0.13.1</a>,
with some bugfixes, dependency upgrades and a couple of improvements.</p>

<p>While development efforts have been moved to other parts of the ecosystem in
the last few months, the IPFS Cluster project continues to be maintained, although
without active development of large features. What users can expect is:</p>

<ul>
<li><a href="/support/">Support</a> over the common channels, responses to issues etc.</li>
<li>Bugfixes and pull request reviews.</li>
<li>Dependency upgrades and project maintenance, with a slow but stable release cadence.</li>
</ul>

<p>IPFS Cluster continues to be used in production to replicate and distribute
important data hosted on IPFS. Happy pinning!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.12.0</title>
      <link>https://ipfscluster.io/news/0.12.0_release/</link>
      <pubDate>Fri, 20 Dec 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.12.0_release/</guid>
      <description>20191220 | Release 0.12.0 IPFS Cluster 0.12.0 is here! It comes with the new ipfs-cluster-follow application, a super-easy way of launching a &amp;ldquo;follower&amp;rdquo; peer.
Follower cluster peers join clusters to participate in the replication and distribution of IPFS content, but do not have permissions to modify the Cluster peerset or perform actions on other peers of the Cluster. When running ipfs-cluster-follow, peers are automatically configured with a template configuration fetched through IPFS (or any HTTP url) and run with some follower-optimized parameters.</description>
      <content:encoded><![CDATA[

<h2 id="20201220-release-0-12-0">20191220 | Release 0.12.0</h2>

<p>IPFS Cluster 0.12.0 is here! It comes with the new <code>ipfs-cluster-follow</code>
application, a super-easy way of launching a &ldquo;follower&rdquo; peer.</p>

<p>Follower cluster peers join clusters to participate in the replication and
distribution of IPFS content, but do not have permissions to modify the
Cluster peerset or perform actions on other peers of the Cluster. When running
<code>ipfs-cluster-follow</code>, peers are automatically configured with a template
configuration fetched through IPFS (or any HTTP URL) and run with some
follower-optimized parameters. Additionally, <code>ipfs-cluster-follow</code> can set up
and run multiple peers in parallel, so users can subscribe to several clusters
at the same time.</p>

<div class="tipbox tip">Release 0.12.1 contains some minor fixes to <code>ipfs-cluster-follow</code>.</div>

<p>Would you like to try it out? Grab <a href="https://dist.ipfs.io/#ipfs-cluster-follow">ipfs-cluster-follow</a> and run:</p>

<pre><code class="language-sh">./ipfs-cluster-follow ipfs-websites init ipfs-websites.collab.ipfscluster.io
./ipfs-cluster-follow ipfs-websites run
</code></pre>

<script id="asciicast-289914" src="https://asciinema.org/a/289914.js" async></script>

<p>Your IPFS daemon will start pinning a list of IPFS-related websites (you will
need about 600MB of available space). You can stop and re-start your followers
any time and they will catch up to the latest state of things.</p>

<p>We have also added a bunch of new features. Pins can now have expiration times
so that they are automatically unpinned at some point. And Cluster operators
can now use the Cluster-GC command to trigger garbage collections on all the
managed IPFS daemons. These should be very useful for IPFS storage providers.</p>
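<p>A quick sketch of both features from the command line (the CID placeholder is
yours to fill in; flag spellings can be double-checked with <code>--help</code>):</p>

<pre><code class="language-sh"># Pin something for 30 days; it is unpinned automatically afterwards:
ipfs-cluster-ctl pin add --expire-in 720h &lt;cid&gt;

# Trigger garbage collection on the IPFS daemons of all cluster peers:
ipfs-cluster-ctl ipfs gc
</code></pre>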

<p>Finally, users running clusters behind NATs or in Dockerized environments will
benefit from improvements in NAT traversal and connectivity. Cluster peers now
support the new libp2p QUIC transport and TLS handshake.</p>

<p>You can read more information about all the new things for this release in the
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">changelog</a>.</p>

<p>We hope the possibilities opened by this new release will make IPFS Cluster a
very useful tool for hosting and re-distributing IPFS data in a collaborative
manner, building communities around archives based on user interests and
strengthening content distribution in the IPFS network by doing so.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.11.0</title>
      <link>https://ipfscluster.io/news/0.11.0_release/</link>
      <pubDate>Tue, 01 Oct 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.11.0_release/</guid>
      <description>20191001 | Release 0.11.0 A few days ago we shipped IPFS Cluster 0.11.0. This was a huge leap forward as it finally crystallizes the journey to replace Raft with a system that allows peers to come and go freely from a cluster while keeping consistency guarantees on the shared pinset. The effort to find a suitable replacement started almost a year ago and resulted in a new crdt component that is based on go-ds-crdt, a datastore implementation using Merkle-CRDTs.</description>
      <content:encoded><![CDATA[

<h2 id="20191001-release-0-11-0">20191001 | Release 0.11.0</h2>

<p>A few days ago we shipped IPFS Cluster 0.11.0. This was a huge leap forward as
it finally crystallizes the journey to replace Raft with a system that allows
peers to come and go freely from a cluster while keeping consistency
guarantees on the shared pinset. The effort to find a suitable replacement
started almost a year ago and resulted in a new <code>crdt</code> component that is based
on <a href="https://github.com/ipfs/go-ds-crdt">go-ds-crdt</a>, a datastore
implementation using
<a href="https://hector.link/presentations/merkle-crdts/merkle-crdts.pdf">Merkle-CRDTs</a>.</p>

<p>As mentioned in the
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">changelog</a>,
version 0.11.0 is the biggest release in the project&rsquo;s history and it comes with
many other features and improvements.</p>

<p>We have also started running IPFS Cluster workshops on several conferences. We
keep an updated list of past and upcoming events at our
<a href="https://github.com/ipfs-cluster/workshops">workshops repository</a>. In these
workshops, participants install and run cluster peers with an IPFS-hosted
configuration and they automatically discover each other, form a Cluster
and try out all commands.</p>

<p>During the upcoming months we will be shipping more features but also start
taking advantage of IPFS Cluster&rsquo;s new features by launching public
collaborative clusters: we will publish instructions for anyone to join
specific clusters to backup pieces of important IPFS data such as the
distributions page, wikipedia mirrors or community websites.</p>

<p>We wish you a lot of success using the latest version of IPFS Cluster.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.10.0</title>
      <link>https://ipfscluster.io/news/0.10.0_release/</link>
      <pubDate>Thu, 07 Mar 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.10.0_release/</guid>
      <description>20190307 | Release 0.10.0 Today we release 0.10.0, a release with major changes under the hood that will make IPFS Cluster perform significantly faster with large pinsets while demanding less memory.
For those upgrading, this release is a mandatory step before any future upgrades, as it will upgrade the internal state to a new format which prepares the floor for the upcoming addition of an alternative CRDT-based &amp;ldquo;consensus&amp;rdquo; component. The new component will increase IPFS Cluster scalability by orders of magnitude and unlock collaborative Clusters where random individuals can collaborate in replicating content.</description>
      <content:encoded><![CDATA[

<h2 id="20190307-release-0-10-0">20190307 | Release 0.10.0</h2>

<p>Today we release 0.10.0, a release with major changes under the hood that will
make IPFS Cluster perform significantly faster with large pinsets while
demanding less memory.</p>

<p>For those upgrading, this release is a mandatory step before any future
upgrades, as it will upgrade the internal state to a new format which prepares
the floor for the upcoming addition of an alternative CRDT-based &ldquo;consensus&rdquo;
component. The new component will increase IPFS Cluster scalability by orders
of magnitude and unlock collaborative Clusters where random individuals can
collaborate in replicating content.</p>

<p>We also have a few new features:</p>

<ul>
<li>Path resolving before pinning and unpinning</li>
<li>Ability to manually specify pin allocations</li>
<li>Environment variable override to all configuration options</li>
<li>Added the possibility to store custom metadata with all pins</li>
</ul>
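<p>A couple of these can be sketched from the command line (peer IDs and CID are
placeholders, and the environment variable shown is illustrative; the exact
variable names derive from the configuration keys):</p>

<pre><code class="language-sh"># Manually allocate a pin to specific peers instead of letting the
# allocator choose:
ipfs-cluster-ctl pin add --allocations &lt;peerID1&gt;,&lt;peerID2&gt; &lt;cid&gt;

# Override a configuration option via environment variable when launching
# the daemon:
CLUSTER_PEERNAME="peer-1" ipfs-cluster-service daemon
</code></pre>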

<p>Finally, the Cluster team would like to thank
<a href="https://github.com/alekswn">@alekswn</a> and
<a href="https://github.com/roignpar">@roignpar</a> for their awesome contributions!</p>

<p>Be sure to check the
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">changelog</a> for
a detailed overview of changes and <strong>upgrade notices</strong>.</p>

<p>Happy pinning!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.9.0</title>
      <link>https://ipfscluster.io/news/0.9.0_release/</link>
      <pubDate>Mon, 18 Feb 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.9.0_release/</guid>
      <description>20190218 | Release 0.9.0 IPFS Cluster version 0.9.0 comes with one big new feature, OpenCensus support! This allows for the collection of distributed traces and metrics from the IPFS Cluster application as well as supporting libraries. Currently, we support the use of Jaeger as the tracing backend and Prometheus as the metrics backend. Support for other OpenCensus backends will be added as requested by the community. Please file an issue if you would like to see a particular backend supported.</description>
      <content:encoded><![CDATA[

<h2 id="20190218-release-0-9-0">20190218 | Release 0.9.0</h2>

<p>IPFS Cluster version 0.9.0 comes with one big new feature, <a href="https://opencensus.io">OpenCensus</a> support! This allows for the collection of distributed traces and metrics from the IPFS Cluster application as well as supporting libraries. Currently, we support the use of <a href="https://jaegertracing.io">Jaeger</a> as the tracing backend and <a href="https://prometheus.io">Prometheus</a> as the metrics backend. Support for other <a href="https://opencensus.io/exporters/">OpenCensus backends</a> will be added as requested by the community. Please file an issue if you would like to see a particular backend supported. We are looking forward to digging deeper into how IPFS Cluster peers operate and communicate with each other and accurately measuring how they are performing in real world deployments.</p>

<p>The one other significant change that comes with the 0.9.0 release is the removal of the Snap distribution of IPFS Cluster. Due to difficulties in getting Snap builds to work reliably without a disproportionate amount of time spent debugging them, we decided to deprecate the distribution mechanism.</p>

<p>Happy Measured Pinning!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.8.0</title>
      <link>https://ipfscluster.io/news/0.8.0_release/</link>
      <pubDate>Wed, 16 Jan 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.8.0_release/</guid>
      <description>20190116 | Release 0.8.0 Since the beginning of IPFS Cluster, one of our ideas was that it should be easily dropped in place of the IPFS daemon in any integration. This was achieved by adding an IPFS Proxy endpoint which essentially provides an IPFS-compatible API for Cluster. Those endpoints and operations which do not make sense to be handled by Cluster are simply forwarded to the underlying daemon. Add, pin and unpin operations become, however, Cluster actions.</description>
      <content:encoded><![CDATA[

<h2 id="20190116-release-0-8-0">20190116 | Release 0.8.0</h2>

<p>Since the beginning of IPFS Cluster, one of our ideas was that it should be
easily dropped in place of the IPFS daemon in any integration. This was
achieved by adding an IPFS Proxy endpoint which essentially provides an
IPFS-compatible API for Cluster. Those endpoints and operations which do not
make sense to be handled by Cluster are simply forwarded to the underlying
daemon. Add, pin and unpin operations become, however, Cluster actions.</p>

<p>IPFS Cluster 0.8.0 comes out today and includes a revamp of the IPFS Proxy
endpoint. We have promoted it to be its own API-type component, extracting it
from the IPFS Connector (which is just the client to IPFS). We have
additionally made improvements so that it truly mimics IPFS, by dynamically
extracting headers from the real daemon that can be reused in the responses
handled by Cluster. Thus, there will be no CORS-related breakage when swapping
out IPFS for Cluster, and custom IPFS headers (e.g. <code>X-Ipfs-Gateway</code>) can be
configured and forwarded by the proxy.</p>

<p>The increasing importance of browser integrations prompted us to fully support
Cross-Origin Resource Sharing (CORS) in the REST API as well. It will now handle
CORS pre-flight requests (OPTIONS) and the configuration allows the user to set up
all the CORS-related headers as needed.</p>
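<p>These headers live in the <code>restapi</code> section of the service configuration; a
minimal sketch (values illustrative, key names as in the configuration
reference):</p>

<pre><code class="language-json">{
  "api": {
    "restapi": {
      "cors_allowed_origins": ["https://example.org"],
      "cors_allowed_methods": ["GET", "POST"],
      "cors_allowed_headers": [],
      "cors_max_age": "10m"
    }
  }
}
</code></pre>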

<p>This is the first release with @kishansagathiya as a full-time member of the
team. Apart from extracting the IPFS proxy component, Kishan is behind the new
<code>--filter</code> flag for <code>ipfs-cluster-ctl status</code>. You can now list all the items
that are <code>pinning</code> or in <code>error</code> without the need for a complex grep invocation. A
few more useful features will be coming up in the future.</p>
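<p>For example (the exact filter names accepted are listed in
<code>ipfs-cluster-ctl status --help</code>):</p>

<pre><code class="language-sh"># List only items that are currently being pinned:
ipfs-cluster-ctl status --filter pinning

# List only items in an error state:
ipfs-cluster-ctl status --filter error
</code></pre>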

<p>For the full list of changes and update notices for this release, check out the
<a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">changelog</a>.</p>

<p>Happy pinning!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Release 0.7.0</title>
      <link>https://ipfscluster.io/news/0.7.0_release/</link>
      <pubDate>Wed, 31 Oct 2018 00:00:00 +0000</pubDate>
      
      <guid>https://ipfscluster.io/news/0.7.0_release/</guid>
      <description>20181031 | Release 0.7.0 We are proud to introduce the 0.7.0 release today. It comes with a few small improvements and bugfixes.
We have slightly changed the /add endpoint response format in a non-compatible way, to return more adequate objects than the ones mimicking the IPFS API. It&amp;rsquo;s not ideal, but better now than later.
We have also fixed the proxy /add endpoint to work correctly with the IPFS Companion extension and js-ipfs-api.</description>
      <content:encoded><![CDATA[

<h2 id="20181031-release-0-7-0">20181031 | Release 0.7.0</h2>

<p>We are proud to introduce the 0.7.0 release today. It comes with a few small improvements and bugfixes.</p>

<p>We have slightly changed the <code>/add</code> endpoint response format in a non-compatible way, to return more adequate objects than the ones mimicking the IPFS API. It&rsquo;s not ideal, but better now than later.</p>

<p>We have also fixed the proxy <code>/add</code> endpoint to work correctly with the IPFS Companion extension and <code>js-ipfs-api</code>. Thanks to @lidel for helping figure out the problem!</p>

<p>Regarding features, @kishansagathiya has been making a few contributions lately and now, among other features, we have new commands like <code>ipfs-cluster-ctl health metrics freespace</code>, which shows the list of last-received <code>freespace</code> metrics and their validity.</p>

<p>Finally, we have included a default <code>docker-compose.yml</code> template, which launches a stack with 2 ipfs daemons and 2 cluster peers.</p>

<p>As usual, for the full list of changes and update notices, check out the <a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">changelog</a>.</p>
]]></content:encoded>
    </item>
    
  </channel>
</rss>
