Ceph is a massively scalable, open-source, distributed storage system. It comprises an object store, a block store, and a distributed file system. 🦑
We're excited to announce our 16th stable release: Ceph Pacific v16.2.0! The release and upgrade notes are now available. Thank you to all the Ceph contributors!
Thank you to our attendees, speakers, and sponsors for making #Cephalocon Barcelona a great success! We will have videos and slides from all the great content available soon.
We are sorry to announce that, due to the recent coronavirus outbreak, we are canceling Cephalocon in Seoul. We are still evaluating whether it makes sense to reschedule for later this year.
.@CERN gives an update on using Ceph in various applications: a 6PB cache for a 300PB tape archive, and a hyper-converged 250TB CephFS cluster for a new email solution. Slides are now available! #CephDays
And that’s a wrap here at Ceph Day CERN. Slides are now available; videos will be announced once they’re posted. Thank you again to our sponsors: @SoftIron, @westerndigital, the Ceph Foundation, and of course @CERN for hosting us. #CephDays
What if I told you Ceph could autotune your placement group worries away? Sage Weil (@liewegas) gives an overview of how to merge your placement groups, or let Ceph completely autotune them, with the new Nautilus release.
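For anyone who wants to try this on their own cluster, here is a minimal sketch, assuming a Nautilus-or-later cluster, an admin keyring on the local host, and a hypothetical pool named "mypool", that enables the autoscaler and prints its recommendations by shelling out to the ceph CLI from Python:

```python
# Minimal sketch: enable the Nautilus PG autoscaler on one pool and inspect
# its recommendations. Assumes a reachable cluster with an admin keyring and
# a hypothetical pool named "mypool".
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Let Ceph manage pg_num for this pool automatically ("warn" only gives advice).
ceph("osd", "pool", "set", "mypool", "pg_autoscale_mode", "on")

# Show the current and target PG counts the autoscaler has computed per pool.
print(ceph("osd", "pool", "autoscale-status"))
```

Setting the mode to "warn" instead of "on" keeps the decision in your hands while still surfacing the autoscaler's advice.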
Hear Sage Weil give a live update on the development of the Pacific Release. The stream starts on February 25th at 17:00 UTC / 18:00 CET / 12 PM EST / 9 AM PST
We are very excited to announce that we have reached the 1 exabyte milestone of community Ceph clusters reporting via telemetry! Thank you to everyone who has opted in! #ceph #telemetry #powerofopensource
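If you'd like your cluster counted in milestones like this one, a minimal sketch of opting in through the telemetry mgr module might look like the following; `telemetry show` lets you preview the anonymized report before anything is sent, and newer releases additionally ask you to acknowledge the data-sharing license:

```python
# Minimal sketch: opt a cluster into Ceph telemetry. Assumes the ceph CLI and
# an admin keyring are available on this host.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

ceph("mgr", "module", "enable", "telemetry")   # make the telemetry module available
print(ceph("telemetry", "show"))               # preview exactly what would be reported
ceph("telemetry", "on")                        # opt in (newer releases prompt for the sharing license)
```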
Ceph is now declared to have stable support with @rook_io v0.9! Deploy Luminous or Mimic, or give Nautilus development a try. RBD mirroring and RGW user creation features have been added, and more!
Today we released Rook v0.9, expanding support for additional storage providers (@cassandra, @nexenta & NFS servers), and @Ceph support is now declared stable. #KubeCon
Satoru Takeuchi, Cybozu, will be speaking on Best Practices of Production-Grade Rook/Ceph Cluster at #Cephalocon Amsterdam, April 16-18, co-located with #KubeCon.
Cephers in Europe! We are proud to announce Ceph Day Germany, happening on February 7 at the Deutsche Telekom AG office in Darmstadt! The schedule is available and registration is open! See you there! #ceph
Which failed device do you remove in a massive distributed storage cluster? In Ceph's new Nautilus release, device management becomes easier: identify physical disks and locate them by node. Blinking lights and more; read all about it:
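As a rough illustration of that workflow, here is a minimal sketch that lists the devices Ceph knows about and toggles the identification light on one of them; the device id below is a placeholder, so substitute a real one reported by `ceph device ls`:

```python
# Minimal sketch of the Nautilus device-management flow: enumerate physical
# devices, then blink the ident LED on one so it can be found in the rack.
# The device id is a placeholder; use a real id reported by `ceph device ls`.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# List known devices along with the daemons and hosts they back.
print(ceph("device", "ls"))

# Turn the identification light on, go find the disk, then turn it back off.
ceph("device", "light", "on", "VENDOR_MODEL_SERIAL_PLACEHOLDER")
ceph("device", "light", "off", "VENDOR_MODEL_SERIAL_PLACEHOLDER")
```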
Thousands of options are available across Ceph, RocksDB, and the Linux kernel to improve performance and efficiency. With so many suggestions floating around, what should you choose and what should you avoid? Ceph performance expert Mark Nelson gives the history and explains what to use.
Hi Scale attendees! Please stop by our booth and get a Pacific release shirt before they're all gone! There are new Ceph gold enamel pins to add to your collection. Also, take advantage of speaking to Ceph experts throughout the day! #SCALE19X
Thanks to @digitalocean 💙 for supporting us by not only sponsoring #Cephalocon but also serving on our foundation board! For more information on how they’re revolutionizing the ☁️, check out ->
Hi #SUSECON! We had a great time meeting you all at the sponsor party (and Dolly Parton!). Come learn about the open source project Ceph and how it’s enabling big storage users to get the job done. Also, come by for a free shirt and stickers!
Don't guess when you're going to have device failures in your storage cluster; let Ceph's new failure prediction tell you ahead of time. New in Nautilus, @liewegas speaks on how it works and what the future development holds.
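For the curious, here is a minimal sketch of what the health-metrics side of this looks like from the CLI, wrapped in Python; the device id is a placeholder, and "local" selects the built-in prediction mode:

```python
# Minimal sketch: enable device health monitoring and the local failure
# predictor, then query metrics and predicted life expectancy for one device.
# The device id is a placeholder; take a real one from `ceph device ls`.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

ceph("device", "monitoring", "on")
ceph("config", "set", "global", "device_failure_prediction_mode", "local")

print(ceph("device", "get-health-metrics", "VENDOR_MODEL_SERIAL_PLACEHOLDER"))
print(ceph("device", "predict-life-expectancy", "VENDOR_MODEL_SERIAL_PLACEHOLDER"))
```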
It's official! Cephalocon 2022 will be taking place April 5-7, 2022, in Portland, Oregon + Virtual! CFP is now open, so don't delay. Registration will be available soon!
.@laura_flowers6, @ojha_neha, and @_Vikhyat introduce three large-scale Ceph test clusters of 1000+ OSDs to validate the latest release, Quincy, and its new features, performance, and resilience. Here are the results.
Hear @leseb_ speak on deploying and managing your Ceph installation with @rook_io for your #Kubernetes environment at 11:30 at the #RHSummit Community Central container theater in Expo Hall A. Come get one of these sweet Ceph lunchboxes afterwards!
What a great first day for #Cephalocon Barcelona! Please join us tomorrow for keynotes starting at 9am. We will have a Town Hall panel with the Ceph component leads at 10:15 to answer your questions, which you can submit now:
We're seeking mentors to propose projects for @Outreachy interns. Outreachy is a paid, remote internship program that helps people traditionally underrepresented in tech make their first contributions to Free and Open Source Software (FOSS) communities.
🦑Who's ready for in-person Ceph Days to return?🦑
Save the date for Ceph Day Dublin on September 13th!
Registration is free - save your spot before all tickets are gone.
The CFP is now open until August 17th.
Last week at Ceph Days NYC, we were proud to bring our global community together to celebrate all things #Ceph. We would like to thank all members and contributors in our community for an incredible event, with a special shoutout to event sponsors Bloomberg and Clyso.
Hi #FOSDEM! Hope you all had a good first day and night! Last day to meet us at the @CentOSProject booth! Grab some stickers and talk to us about future development and events coming up! #FOSDEM2020