Percona Server for MongoDB 3.6.14-3.4 Now Available

https://www.percona.com/blog/2019/10/10/percona-server-for-mongodb-3-6-14-3-4-available/

https://www.percona.com/blog/?p=62667

Percona Server for MongoDB 3.6.14-3.4

Percona announces the release of Percona Server for MongoDB 3.6.14-3.4 on October 10, 2019. Download the latest version from the Percona website or the Percona software repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. Also, it includes MongoRocks storage engine, which is now deprecated. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Percona Server for MongoDB 3.6.14-3.4 is based on MongoDB 3.6.14. In this release, the license of RPM and DEB packages has been changed from AGPLv3 to SSPL.

Bugs Fixed

  • PSMDB-447: The license for RPM and DEB packages has been changed from AGPLv3 to SSPL.

The Percona Server for MongoDB 3.6.14-3.4 release notes are available in the official documentation.

Percona Monitoring and Management (PMM) 2.0.1 Is Now Available

https://www.percona.com/blog/2019/10/09/percona-monitoring-and-management-pmm-2-0-1-is-now-available/

https://www.percona.com/blog/?p=62660

Percona Monitoring and Management 2.0.1

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring your database performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MariaDB®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

In this release, we are introducing the following PMM enhancements:

  • Securely share Dashboards with Percona – let Percona engineers see what you see!
  • Improved navigation – PMM now remembers which host and service you were viewing, and applies these as filters when navigating to Query Analytics

Securely share Dashboards with Percona

A dashboard snapshot is a way to securely share what you’re seeing with Percona. When created, we strip sensitive data like queries (metrics, template variables, and annotations) along with panel links. The shared dashboard will only be available for Percona engineers to view, as it is protected by Percona’s two-factor authentication system. The content on the dashboard will assist Percona engineers in troubleshooting your case.

Improved navigation

Now when you transition from looking at metrics into Query Analytics, PMM will remember the host and service, and automatically apply these as filters.

Improvements

  • PMM-4779: Securely share dashboards with Percona
  • PMM-4735: Keep one old slowlog file after rotation
  • PMM-4724: Alt+click on check updates button enables force-update
  • PMM-4444: Return “what’s new” URL with the information extracted from the pmm-update package changelog

Fixed bugs

  • PMM-4758: Remove Inventory rows from dashboards
  • PMM-4757: qan_mysql_perfschema_agent failed querying events_statements_summary_by_digest due to data type conversion
  • PMM-4755: Fixed a typo in the InnoDB AHI Miss Ratio formula
  • PMM-4749: Navigation from Dashboards to QAN when some Node or Service was selected now applies filtering by them in QAN
  • PMM-4742: General information links were updated to go to PMM 2 related pages
  • PMM-4739: Remove request instances list
  • PMM-4734: A fix was made for the collecting node_name formula at MySQL Replication Summary dashboard
  • PMM-4729: Fixes were made for formulas on MySQL Instances Overview
  • PMM-4726: Links to services in MongoDB singlestats didn’t show Node name
  • PMM-4720: machine_id could contain a trailing \n
  • PMM-4640: It was not possible to add MongoDB remotely if password contained a # symbol

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Blog Poll: Adding/Upgrading Instances, Hardware, and Migration

https://www.percona.com/blog/2019/10/08/blog-poll-adding-upgrading-instances-hardware-and-migration/

https://www.percona.com/blog/?p=62638

Time for a new question in our blog poll series! This time, it’s about adding or upgrading to meet database needs.  Here’s the question: In the last 24 months, how often have you added or upgraded database instances, added hardware to existing servers, or migrated to a new hosting/cloud provider?

Last year, we asked you a few questions in a blog poll and we received a great amount of feedback. We wanted to follow up on some of those same survey questions to see what may have changed. We’d love to hear from you!

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

This poll will be up for one month and will be maintained over in the sidebar should you wish to come back at a later date and take part. We look forward to seeing your responses!

 

Achieving Disaster Recovery with Percona XtraDB Cluster

https://www.percona.com/blog/2019/10/07/achieving-disaster-recovery-with-percona-xtradb-cluster/

https://www.percona.com/blog/?p=62510

Disaster Recovery with Percona XtraDB Cluster

One thing that comes up often when working with a variety of clients at Percona is “How can I achieve a Disaster Recovery (DR) solution with Percona XtraDB Cluster (PXC)?”  Unfortunately, decisions with far-reaching consequences are sometimes made by individuals who do not fully understand the architecture and its limitations.  As a Technical Account Manager (TAM), I am often engaged to help clients look for better solutions, or at least to mitigate as many issues as possible.  Clearly, in a perfect world, we would like to get the right experts involved in these types of discussions to ensure more appropriate solutions, but we all know this is not a perfect world.

One such example involves the idea that if we take a PXC cluster and split it into two datacenters with two nodes in a primary datacenter and one node in a separate datacenter, we will have a hot standby node at all times.  In this case, the application can be pointed to the third node in the event of something catastrophic.  This sounds great…in theory.  The problem is latency.

Latency can cripple a PXC cluster

By design, PXC is meant to work with nodes that can communicate with one another quickly.  The underlying cluster technology, known as Galera, is considered “virtually synchronous” in nature.  In this architecture, writesets are replicated to all active nodes in the cluster at transaction commit and go into a queue.  Next, each node performs a certification of the writeset, which is deterministic in nature.  A bug notwithstanding, each node will either accept or reject the certification in the same manner.  So, either the writeset is applied on all nodes or it is rolled back by all nodes.  What matters for this discussion is the write queue.

As writes come in on one of the nodes, the writesets are replicated to each of the other nodes.  In the above three-node cluster, the writes are certified quickly with the two nodes in the same datacenter.  However, the third node is located in a different datacenter some distance away.  In this case, the writeset must travel across the WAN and will go into a queue (wsrep_local_recv_queue).
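You can watch this queue directly on any node; a minimal check, using the standard Galera status counters, might look like:

```sql
-- Current size of the local receive queue, plus its recent min/max/avg.
-- A queue that keeps growing on the remote node is the early warning
-- sign of flow control stalls.
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';
```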

So, what’s the problem?

To ensure that one node does not get too far behind the rest of the cluster, any node can send a flow control message to the cluster.  This instructs the cluster to stop replicating new events until the slow node catches up to within some number of writesets as defined by gcs.fc_limit in the configuration.  Essentially, when the number of transactions in the queue exceeds the gcs.fc_limit, flow control messages will be sent and the cluster will stop replicating new writesets.  Unless you have changed it, this will be 16 writesets.
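You can check whether flow control is actually firing, and raise the limit at runtime, with something along these lines (a sketch; gcs.fc_limit and the wsrep_flow_control_* counters are standard Galera names, but the value 128 is only an illustration and should be tuned for your workload):

```sql
-- How often this node has sent or received flow control pause events.
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';

-- Raise the queue threshold from the default of 16 so a briefly lagging
-- node does not pause the whole cluster. Takes effect immediately, but
-- must also be set in my.cnf to survive a restart.
SET GLOBAL wsrep_provider_options = 'gcs.fc_limit=128';
```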

Remember that PXC is virtually synchronous

When replication stops, all nodes stop accepting writes momentarily.  In this event, the system seems to stall until the local recv queue makes some space for new writesets, at which point replication will continue.  This can appear as a stall to the application and leads to huge performance issues.

So, what is a better solution to the above situation?  It is preferable to utilize an asynchronous Slave server replicating from the PXC cluster for failover.  This is the standard replication built into Percona Server, not Galera.  While this may mean adding another server, that cost can be mitigated by the use of garbd, the Galera Arbitrator.  This lightweight process acts as an arbitrator to maintain quorum of the cluster without storing data, so it can run on an app server or some other existing server in the environment, decreasing the number of data nodes needed in PXC.  This keeps the server count down in cases that are cost-sensitive.
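Pointing the DR node at one PXC node is ordinary MySQL asynchronous replication; a minimal sketch from the DR node might be the following (the host name, user, and password are placeholders, and the chosen PXC node needs binary logging and a replication user already configured):

```sql
-- On the DR node: replicate asynchronously from one node of the PXC
-- cluster. 'pxc-node1' and 'repl' are placeholder names.
CHANGE MASTER TO
  MASTER_HOST = 'pxc-node1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_AUTO_POSITION = 1;  -- assumes GTID-based replication is enabled
START SLAVE;
```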

The asynchronous nature means no sending of flow controls from the node in the remote datacenter.  Replication will lag and catch up as needed with the PXC cluster none the wiser.  Because all nodes in the PXC cluster are local to one another, ideally latency is minimal and writesets can be applied much more quickly and stalls minimized.  Of course, this does come with a few challenges.

One challenge is that, due to the nature of asynchronous replication, there can be significant lag on the DR node.  This is not always an issue, however, as the only time you use this server is during a disaster.  The writes will be sent over immediately by the Master, so it is reasonable to expect that the DR node will eventually catch up, and you hope not to lose any data, although there are no guarantees.
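That lag is visible on the DR node through the standard replication status output (note that Seconds_Behind_Master is only a rough indicator; a heartbeat mechanism such as pt-heartbeat gives a truer number):

```sql
-- On the DR node: Seconds_Behind_Master shows how far behind the
-- asynchronous replica is, and the Slave_IO_Running / Slave_SQL_Running
-- flags show whether the replication threads are alive.
SHOW SLAVE STATUS;
```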

This brings us to another concern.  Simple asynchronous replication has no guarantee of consistency like PXC provides.  In PXC, there are controls in place to guarantee consistency, but asynchronous replication provides none.  There are, therefore, cases where data drift can occur between Master and Slave.  To mitigate this risk, you can use pt-table-checksum from the Percona Toolkit to detect inconsistency between the Master node of the PXC cluster and the Slave and rectify it with pt-table-sync from the same toolkit.  This, of course, requires that you run this process often.  If it is done as part of an automated process, it should also be monitored to ensure it is being done and whether or not data drift is occurring.
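pt-table-checksum records its results in a checksums table (percona.checksums by default); after a run against the Master, a query along these lines on the Slave surfaces any drifted chunks (adapted from the tool's documentation; the table name assumes the default):

```sql
-- Chunks where the Slave's row count or checksum differs from the
-- Master's, grouped per table.
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
       OR master_crc <> this_crc
       OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;
```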

You will also want to monitor that the Master node does not go down, as there is no built-in process of failing the DR node over to a new Master within the PXC cluster.  Our very own Yves Trudeau wrote a utility to manage this, and more information can be found here:

https://github.com/y-trudeau/Mysql-tools/tree/master/PXC

Improving Performance

While this solution presents some additional complexity, it does provide for a more performant PXC cluster.  As a TAM, I have seen geographically-distributed PXC result in countless incidents of a system down in production.  Even when the system doesn’t go down, it often slows down due to the latency issues.  Why take that performance impact on every transaction for a DR solution that you hope never to use, when there is an alternative solution as proposed here?  Instead, you could benefit from an alternative approach that provides an acceptable failover solution while improving performance day in and day out.

Centralization Vs. Decentralization of DBA Teams

https://www.percona.com/blog/2019/10/07/centralization-vs-decentralization-of-dba-teams/

https://www.percona.com/blog/?p=62512

Centralization Vs. Decentralization of DBA Teams

As a Technical Account Manager (TAM), I have seen many of our clients adopt a decentralized DBA Team.  In many cases, this is an effort to better align the DBA Team with the Development Teams.  This is an admirable and logical goal.  As often happens, though, you trade one set of challenges for another.

Centralized DBA Teams

First, let’s talk about the challenges of a centralized DBA Team.  Here, the DBAs are all on a single team which is likely separated by platform.  So, you often have a MySQL Team, Oracle Team, SQL Server Team, etc.  These teams usually report up to one manager and although they act somewhat independently by platform, there is also some level of standardization in documentation, reporting structure, procedure, etc.  There are a number of benefits to this approach and a few challenges.  One benefit is having standardization across the whole of the company for how a given technology is documented, deployed, managed, etc.  Of course, this can be a limiting factor as well since there is little opportunity for customization.  Everything tends to become “vanilla” in nature and everything looks alike; from a positive perspective, this is consistency.

Consistency can be very useful.  When new members are added to the team, all servers are essentially the same.  It really doesn’t matter what the application is, a new team member can become proficient very quickly across the whole of the infrastructure.  Lessons learned on one set of servers translates very well to another set of servers supporting a completely different application.

As noted above, however, this is also a challenge.  What happens when a particular application needs something different?  Breaking the norm is the antithesis of consistency, and such requests often meet resistance from team leadership.

Another challenge is that often the DBAs are supporting so many different technologies that they are challenged to fully understand the application and how it works.   There are just too many applications to become intimately aware of each.  In this case, DBAs are often more reactive rather than being proactive and becoming advisers to the development teams.  This is quite common in larger enterprises that have many diverse applications.

Decentralized DBA Teams

To combat this, many enterprises adopt the decentralized model and break the DBA Teams up into smaller teams aligned closely with the development teams.  This seems to make much more sense in many ways, since the DBAs will be laser-focused on fewer applications and work much more closely with the developers to ensure an improved solution.

So, what is the issue with this approach?  There are always trade-offs with any approach.  If there were one clear winner, everyone would just use it.  One of the largest challenges I have seen as a TAM with decentralization has been the lack of standardization.  Each DBA team acts virtually independently from every other team.  Problems that were once solved for the whole organization are suddenly being faced in parallel by multiple teams.  As a result, teams are often “re-inventing the wheel” each time they are confronted by a challenge that may have already been resolved by another team.  Without strong internal communication, teams waste time looking for solutions that have already been found.

This is one of the most fulfilling aspects of my role as a TAM.  I am in the unique position of often meeting with multiple teams and socializing these solutions across teams.  I am often asked in meetings with my clients whether I have seen this issue with another team or even another client.  If the answer is affirmative, the next question is obviously about how they resolved it.  Experience wins the course here and provides significant improvement in time to resolution of the issue.

Another challenge I see pertains to consistency.  In decentralized teams, DBAs will sometimes move from team to team as demand for resources changes.  In such cases, new team members require significant time to get up to speed on systems as consistency has been compromised due to each team doing things differently.  Installations may be in different directories or folders on the servers, documentation may be better or worse, and so on.  With no centralized oversight of standards, moving to a new team can slow the process of getting the DBA up to speed.

Communication is Key

Whether you decide to keep a centralized DBA Team or to decentralize the team, some level of consistency and communication are critical.  If you chose to decentralize the DBA Team, be sure someone is acting as a centralized resource, such as a TAM, who is looking for patterns of issues and working to find proven solutions.

Top 5 Takeaways from Percona Live Europe 2019

https://www.percona.com/blog/2019/10/04/top-5-takeaways-from-percona-live-europe-2019/

https://www.percona.com/blog/?p=62561

Percona Live Europe Wrap Up

It’s a wrap! Another Percona Live Europe is in the books and we want to thank everyone for making it an amazing event.

A special shout-out to this year’s sponsors: AWS, PlanetScale, Altinity, Galera Cluster, MySQL, Shannon Systems, Tarantool, Booking.com, Free Software Foundation, MariaDB, Open Source Initiative, HPCwire, Datanami, and Enterprise AI.

We saw lots of first-time attendees, and it was great to welcome them into the open source community. Many sessions were standing-room-only and the keynotes were, as usual, the highlight of the week.

Here are our key takeaways from the week:

#1. Monitoring & Management Got Better

We debuted v2 of our award-winning database monitoring tool, Percona Monitoring and Management, a single-pane-of-glass to monitor the performance of MySQL, MariaDB, MongoDB, and PostgreSQL database environments.

This version offers new performance and usability query improvements, new query analytics for PostgreSQL, new labeling that leverages Standard (system-generated) and Custom tags, as well as a new administrative API.

“With users’ expectations for application speed and availability at an all-time high, companies require quicker and deeper insight into their database performance bottlenecks, a higher-level perspective of multiple systems they monitor, and the ability to monitor larger and more complex systems. PMM2 delivers on all these requirements and more,” said Peter Zaitsev, co-founder and CEO of Percona. ~ DBTA

It can be used on-premises and in the cloud, and is compatible with major cloud providers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, with specific dashboards for AWS RDS and Amazon Aurora.

#2. Trusted Distribution

Percona Distribution for PostgreSQL was formally announced at PLE, offering the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together. It is an easy, yet powerful way to implement an enterprise-grade, fully open source PostgreSQL environment. Backed by Percona’s world-class support and engineering teams, it gives companies the peace of mind that these components are tested and configured to work together, with 24×7 support.

“Postgres is truly the most open of all of the most popular open source databases. So the licensing is very open, it’s very easy to contribute, it has a massive following and a massive base of contributors,” said Matt Yonkovit. “There’s a lot of awesome extensions, a lot of awesome features, a lot of awesome open source tools that are out in the ecosystem. And so there’s a lot to draw from, but the problem is, a lot of them don’t necessarily work together.”

“It was obvious that a lot of these components are already there, you just need to package them in a way that is designed to work together and minimize the bugs and the friction between these components, these extensions.”  ~ The New Stack

#3. By the Numbers

We shared the results from our 2019 Open Source Data Management Software Survey with the Percona Live Europe crowd, and the data shows some interesting trends:

  • Multi-database, multi-cloud, and hybrid are not only the reality but commonplace, with over 92% of respondents saying they use more than one database.
  • Environments are getting increasingly more complex, and the larger the organization, the more complex the hosting environment.
  • The top two benefits of open source are cost savings (79.4%) and avoiding vendor lock-in (62%), but the benefit of having a community also scored over 50%.

Here’s what the press had to say about the survey:

“Companies around the world prefer to have multiple databases in multiple locations over multiple platforms, according to a report unveiled at the Percona Live 2019 event in Amsterdam,” Mayank Sharma, TechRadar

According to Ian Murphy, Enterprise Times, the Percona Open Source Data Management Survey results showed, “When Oracle bought Sun and acquired MySQL, there was an openly voiced concern for the future of open source databases. That has long been put to bed and with the rise of cloud, there are more open source based options than pure commercial ones.”

On making the raw data from Percona’s Open Source Data Management Survey available to anyone: “We want to provide the community with a way to get this data, and use it and to make all open source databases better,” says Matt Yonkovit, Percona’s Chief Experience Officer. ~ TechRadar

#4. Simple or Complex?

Matt may be biased, but his keynote about how efforts in simplicity often lead to more complexity was an important talk for our community. Even with automation, DevOps processes, and the emergence of new technology, systems still crash, databases are still breached, and “smart people still do stupid things.”

So… what can we do about it?

“We’ve strived for a long time to try and reduce the complexity of our corporate systems, especially around the database space. And what has been happening is the exact opposite of a simplification, it’s been making it more complex, it’s been making it more fractured, if you will,” said Matt Yonkovit, Percona Chief Experience Officer. ~ The New Stack

#5. The Open Source Community Rocks!

One of the best takeaways from this year’s Percona Live was just how great our community is. As mentioned in our survey results, “the benefit of having a community” was noted as a top reason to adopt open source by over 50% of our respondents—and we can see why!

From networking in the hallways to dinners and community events, the sharing and support among the community are always astounding.

You are what makes the open source community so great, and we thank you for attending and participating in Percona Live Europe. Be sure to check out our Database Performance Blog, our social media accounts (Twitter, Facebook, LinkedIn), and be on the lookout for information about Percona Live 2020 in Austin, May 18-20.

We look forward to seeing you soon!

Percona XtraDB Cluster 8.0 (experimental release) : SST Improvements

https://www.percona.com/blog/2019/10/04/percona-xtradb-cluster-8-0-experimental-release-sst-improvements/

https://www.percona.com/blog/?p=62177


Starting with the experimental release of Percona XtraDB Cluster 8.0, we have made changes to the SST process to make the process more robust and easier to use.

  • mysqldump and rsync are no longer supported SST methods.

    Support for mysqldump was deprecated starting with PXC 5.7 and has now been completely removed.

    MySQL 8.0 introduced a new redo log format that limits the use of rsync while upgrading from PXC 5.7 to 8.0. In addition, the new Galera-4 also introduced changes that further limit the use of rsync.

    The only supported SST method is xtrabackup-v2.

  • A separate Percona XtraBackup installation is no longer required.

    The required Percona XtraBackup (PXB) binaries are now shipped as part of PXC 8.0, but they are not installed for general use. If you want to use PXB outside of an SST, you will have to install it separately.

  • SST logging now uses MySQL error logging

    Previously, the SST script would write directly to the error log file. Now, the SST script uses MySQL error logging. A side effect of this change is that the SST logs are not immediately visible. This is due to the logging subsystem being initialized after the SST has completed.

  • The wsrep_sst_auth variable has been removed.

    PXC 8.0 now creates an internal user (mysql.pxc.sst.user) with a random password for use by PXB to take the backup. The cleartext of the password is not saved and the user is deleted after the SST has completed.

    (This feature is still in development and may change before PXC 8.0 GA)

  • PXC SST auto-upgrade

    When PXC 8.0 detects that the SST came from a lower version, mysql_upgrade is automatically invoked. Also, “RESET SLAVE ALL” is run on the new node if needed. This is invoked when receiving an SST from either PXC 5.7 or PXC 8.0.

    (This feature is still in development and may change before PXC 8.0 GA)
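With the other methods gone, a node’s SST configuration reduces to a single setting. A minimal my.cnf fragment might look like the following (all other cluster settings omitted; xtrabackup-v2 is also the default, so the line is shown here only for explicitness):

```ini
[mysqld]
wsrep_sst_method=xtrabackup-v2
```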

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster 8.0 New Feature: wsrep_sst_auth Removal

https://www.percona.com/blog/2019/10/03/percona-xtradb-cluster-8-0-new-feature-wsrep_sst_auth-removal/

https://www.percona.com/blog/?p=62448


The problem

In PXC 5.6 and 5.7, when using xtrabackup-v2 as the SST method, the DBA must create a user with the appropriate privileges for use by Percona XtraBackup (PXB). The username and password of this backup user are specified in the wsrep_sst_auth variable.

This is a problem because the username and password were stored in plaintext, which required that the configuration file be secured.

The PXC 8.0 solution

(This feature is still under development and may change before PXC 8.0 GA)

Because wsrep_sst_auth is only needed on the donor side to take a backup, PXC 8.0 uses an internal user (created specifically for use by PXC) with a randomly generated password. Since this user is only needed on the donor, the plaintext password is never needed on the joiner node.

This password consists of 32 characters generated at random. A new password is generated for each SST request. The plaintext of the password is never saved and never leaves the node. The username/password is sent to the SST script via unnamed pipes (stdin).
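The mechanics of that handoff can be sketched in a few lines of shell. This is an illustrative stand-in, not the actual PXC code: the password generation idiom and the stdin-reading child process are assumptions that mirror the behavior described above (a fresh 32-character password per SST, passed via a pipe so the plaintext never reaches disk or the process argument list).

```shell
# Generate a 32-character random alphanumeric password (idiomatic sketch;
# the real PXC implementation differs in detail).
PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)

# Stand-in for launching the SST script: the "child" here simply reads the
# username:password line from its stdin, the way the SST script receives it.
RECEIVED=$(printf 'mysql.pxc.sst.user:%s\n' "$PASS" | head -n 1)

echo "${#PASS}"     # length of the generated password
echo "$RECEIVED"    # what the child process saw on stdin
```

Because the credential travels only through an unnamed pipe, nothing sensitive is left behind in a config file once the SST completes.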

New PXC internal user accounts

mysql.pxc.internal.session

The mysql.pxc.internal.session user account provides the appropriate security context to create and set up the other PXC accounts. This account has a limited set of privileges: just enough to create the mysql.pxc.sst.user.

This account is locked and cannot be used to log in (the password field will not allow login).

mysql.pxc.sst.user

The mysql.pxc.sst.user is used by XtraBackup to perform the backup. This account has the full set of privileges needed by XtraBackup.

This account is created for an SST and is dropped at the end of the SST, as well as when the PXC node is shut down. The creation/provisioning of this user account is not written to the binlog and is not replicated to other nodes. However, the account is sent with the backup to the joiner node, so the joiner node also has to drop this user after the SST has finished.

mysql.pxc.sst.role

The mysql.pxc.sst.role is the MySQL role that provides the privileges needed for XtraBackup. This allows for easy addition/removal of privileges needed for an SST.

The experimental release of PXC is based on MySQL 8.0.15, and we have not implemented the role-based support due to issues found with MySQL 8.0.15. This will be revisited in future versions of PXC 8.0.

Program flow

  1. DONOR node receives SST request from the JOINER
  2. DONOR node generates a random password and creates the internal SST user
    SET SESSION sql_log_bin = OFF;
    DROP USER IF EXISTS 'mysql.pxc.sst.user'@localhost;
    CREATE USER 'mysql.pxc.sst.user'@localhost IDENTIFIED WITH 'mysql_native_password' BY 'XXXXXXXX' ACCOUNT LOCK;
    GRANT 'mysql.pxc.sst.role'@localhost TO 'mysql.pxc.sst.user'@localhost;
    SET DEFAULT ROLE 'mysql.pxc.sst.role'@localhost to 'mysql.pxc.sst.user'@localhost;
    ALTER USER 'mysql.pxc.sst.user'@localhost ACCOUNT UNLOCK;

    The role-based code is not used in the current release due to issues with MySQL 8.0.15; currently, the user is created with all the needed privileges granted explicitly.
  3. Launch the SST script (passing the username/password via stdin)
  4. SST uses the username/password to perform the backup
  5. SST script exits
  6. The DONOR node drops the user.
  7. The JOINER node receives the backup and drops the user. Note that the JOINER node also contains the internal SST user!

As a precaution, the user is also dropped when the server is shut down.

Experimental Binary of Percona XtraDB Cluster 8.0

https://www.percona.com/blog/2019/10/01/experimental-binary-of-percona-xtradb-cluster-8-0/

https://www.percona.com/blog/?p=62158


Percona is happy to announce the first experimental binary of Percona XtraDB Cluster 8.0 on October 1, 2019. This is a major step in tuning Percona XtraDB Cluster to be more cloud- and user-friendly. This release combines the updated and feature-rich Galera 4 with substantial improvements made by our development team.

Improvements and New Features

Galera 4, included in Percona XtraDB Cluster 8.0, has many new features. Here is a list of the most essential improvements:

  • Streaming replication supports large transactions
  • The synchronization functions allow action coordination (wsrep_last_seen_gtid, wsrep_last_written_gtid, wsrep_sync_wait_upto_gtid)
  • More granular and improved error logging. wsrep_debug is now a multi-valued variable to assist in controlling the logging, and logging messages have been significantly improved.
  • Some DML and DDL errors on a replicating node can either be ignored or suppressed. Use the wsrep_ignore_apply_errors variable to configure.
  • Multiple system tables help find out more about the state of the cluster.
  • The wsrep infrastructure of Galera 4 is more robust than that of Galera 3. It features faster code execution, better state handling, improved predictability, and improved error handling.

Percona XtraDB Cluster 8.0 has been reworked in order to improve security and reliability as well as to provide more information about your cluster:

  • There is no need to create a backup user or maintain the credentials in plain text (a security flaw). An internal SST user is created, with a random password for making a backup, and this user is discarded immediately once the backup is done.
  • Percona XtraDB Cluster 8.0 now automatically launches the upgrade as needed (even for minor releases). This avoids manual intervention and simplifies the operation in the cloud.
  • SST (State Snapshot Transfer) can roll back or fix an unwanted action. It is no longer just a block copy, but a smart operation that makes the best use of the copy phase.
  • Additional visibility statistics are introduced in order to obtain more information about Galera internal objects. This enables easy tracking of the state of execution and flow control.

Installation

This release can only be installed from a tarball; it cannot be installed through a package management system such as apt or yum. Note that this release is not ready for use in any production environment.

Percona XtraDB Cluster 8.0 is based on the following:

  • Percona Server for MySQL 8.0.15-5
  • Codership WSREP API release 27
  • Codership Galera library 4.2

Please be aware that this release will not be supported in the future, and as such, neither the upgrade to this release nor the downgrade from higher versions is supported.

This release is also packaged with Percona XtraBackup 8.0.5. All Percona software is open-source and free.

In order to experiment with Percona XtraDB Cluster 8.0 in your environment, download and unpack the tarball for your platform.

Note

Be sure to check your system and make sure that the packages Percona XtraDB Cluster 8.0 depends on are installed.

For Debian or Ubuntu:

$ sudo apt-get install -y \
socat libdbd-mysql-perl \
rsync libaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 \
libgpg-error0 libssl1.1 libstdc++6 zlib1g libatomic1

For Red Hat Enterprise Linux or CentOS:

$ sudo yum install -y openssl socat  \
procps-ng chkconfig procps-ng coreutils shadow-utils \
grep libaio libev libcurl perl-DBD-MySQL perl-Digest-MD5 \
libgcc rsync libstdc++ libgcrypt libgpg-error zlib glibc openssl-libs

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Avoid Vendor Lock-in with Percona Backup for MongoDB

https://www.percona.com/blog/2019/09/30/avoid-vendor-lock-in-with-percona-backup-for-mongodb/

https://www.percona.com/blog/?p=62460


Percona Backup for MongoDB v1 is the first GA version of our new MongoDB backup tool. It has been custom-built to assist users who don’t want to pay for MongoDB Enterprise and Ops Manager but do want a fully-supported community backup tool that can perform cluster-wide consistent backups in MongoDB.

Please visit our webpage to download the latest version of this software.

In a nutshell, what can it do?

Currently, Percona Backup for MongoDB is designed to give you an easy command-line interface that allows you to perform a consistent backup/restore of clusters and non-sharded replica sets. It uses S3 (or S3-compatible) object storage for the remote store. Percona Backup for MongoDB can improve your cluster backup consistency compared to the filesystem snapshot method, and can save you time and effort if you are implementing MongoDB backups for the first time.

Why did we create Percona Backup for MongoDB?

Many people in the community and within our customer base told us that they wanted a tool that could easily back up and restore clusters. This feature was something they felt “locked” them into MongoDB’s enterprise subscription.

Percona is anti-vendor-lock-in and strives to create and provide tools, software, and services to help our customers achieve the freedom they need to use, manage, and move their data easily. We will continue to develop and add new features to Percona Backup for MongoDB, enabling users to have more freedom of choice when selecting a backup software provider.

Couldn’t I do all this myself?

It is possible to build your own scripts and tools which enable you to perform consistent backups across a cluster; in fact, many in the community have done this. But, not all users and enterprises have the technical skill or knowledge required to build something that they can feel confident will consistently back up their databases. Many users are looking for a fully-supported, community-driven tool to fill in the gaps. This is especially important given the steady evolution of MongoDB’s replication and sharding protocols. Keeping up with new features and code can be challenging for DBAs, who usually also have a range of additional responsibilities to meet.

What other features are coming in the future?

Percona Backup for MongoDB v1 met the original goal laid out by the community: to create a tool that can create a consistent backup for clusters as easily as mongodump does for a non-sharded single replica set. On top of that, the restore is as simple as running “pbm list” followed by “pbm restore <backup-you-want>”.

However, there is still a lot more we plan to do in order to extend and enhance this tool. Our short-term feature roadmap includes:

  • Point-in-time restores
  • Better User Interface: Additional Status and Logging
  • Distributed transaction handling

Help us make it better!

We would love your help in making this tool even better! Yes, we accept code contributions. We know many of you have already solved tricky issues around backup in MongoDB and would appreciate any contributions you have which would improve Percona Backup for MongoDB. We would also love to hear what features you think we need to include in future versions.

For more details on our commitment to MongoDB and our latest software, please visit our website.