Going to take another stab at this...
We have a development environment (made up of VMs) for developing and
testing the deployment tools for a particular service that depends on
CephFS for sharing state data between hosts. In production we will be
using filestore OSDs because of the very low volume of data (a few
hundred kilobytes) and the very low rate of change. There's too little
performance benefit to justify creating an operational exception by
configuring the hardware differently from everything else just to
provide separate block devices.
Unfortunately, even though the documentation says that filestore OSDs are
well tested and supported, they don't seem to be well documented.
In a recent test of our deployment tools (using Kraken on CentOS 7) the
'ceph-deploy osd' steps failed. Assuming this was simply because Kraken
is now so far past EOL that it was no longer properly supported on an
updated CentOS box, I started working on an update to Luminous.
However, I've since discovered that the problem is actually that
ceph-deploy's OSD 'prepare' and 'activate' commands have been removed
regardless of Ceph release. I now realize that ceph-deploy is
maintained independently from the rest of Ceph, but not documented
independently, so the Ceph documentation that references ceph-deploy
now seems to be frequently incorrect.
Except where mentioned otherwise, the rest of this uses the latest
Luminous from the download.ceph.com Yum repository (12.2.10) with
ceph-deploy 2.0.1.
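Since ceph-deploy is versioned separately from Ceph itself, I've been
confirming the pairing on each node like this (standard commands; the
package name is as shipped in the el7 repo):

  # ceph and ceph-deploy report their versions independently
  ceph --version
  ceph-deploy --version
  rpm -q ceph-deploy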
Our scripts, written for Kraken, were doing this to create filestore OSDs
on four dev VMs:
  ceph-deploy osd prepare tldhost01:/var/local/osd0 tldhost02:/var/local/osd0 \
      tldhost03:/var/local/osd0 tldhost04:/var/local/osd0

  ceph-deploy osd activate tldhost01:/var/local/osd0 tldhost02:/var/local/osd0 \
      tldhost03:/var/local/osd0 tldhost04:/var/local/osd0
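For completeness: before 'prepare', our scripts first created the
target directory on each host, roughly like this (the chown reflects
what the ceph user needed on our CentOS 7 boxes; your layout may
differ):

  # run on each of tldhost01..tldhost04 before 'ceph-deploy osd prepare'
  mkdir -p /var/local/osd0
  chown ceph:ceph /var/local/osd0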
Both 'prepare' and 'activate' now appear to have been removed entirely
(neither shows up in the help output generated when the above commands
fail), whether the target release is Kraken or Luminous. This seems to
have changed in the last 60 days or so.
The above commands now fail with this error:
  usage: ceph-deploy osd [-h] {list,create} ...
  ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare'
  (choose from 'list', 'create')
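The surviving subcommand's own help can be dumped directly, which is
how I've been reverse-engineering the new flags:

  # list the options 'create' now accepts
  ceph-deploy osd create -h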
I'm trying to figure out the 'ceph-deploy osd create' syntax to
duplicate the above, but the documentation is no help. The Luminous
documentation still shows the above prepare/activate syntax as valid,
and continues to show the journal path as optional for the
'ceph-deploy osd create' command:
<http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/#prepare-osds>
The same documentation for Mimic seems to be updated for the new
ceph-deploy syntax, including the elimination of 'prepare' and 'activate',
but doesn't include specifics for a filestore deployment:
<http://docs.ceph.com/docs/mimic/rados/deployment/ceph-deploy-osd/#create-osds>
The new syntax appears to allow only one host at a time, and requires
splitting up the host, data, and journal values. After much trial and
error I've also found that specifying the journal path is now
mandatory, but since I don't know for certain what ceph-deploy was
doing with the journal path by default before, I've had a hard time
sorting out what to try with the new syntax. Following that logic, and
skipping over a few intermediate attempts, my latest attempt moves the
Ceph data down one level in the directory tree and adds a journal
directory.
Where tldhost01 is localhost:
  mkdir -p /var/local/ceph/{osd0,journal}
  ceph-deploy osd create --data /var/local/ceph/osd0 \
      --journal /var/local/ceph/journal --filestore tldhost01
The assumption here is that --data and --journal accept filesystem
paths the same way 'prepare' and 'activate' used to, but that is
clearly not the case: the command complains that I have not supplied
block devices. Evidently --filestore does not do what I hoped.
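To make the mismatch concrete, the shape of invocation the new
subcommand appears to expect is more like the following (device names
are hypothetical, following the Mimic docs above; this is exactly what
our all-in-one VMs don't have):

  # what 'osd create --filestore' apparently wants: block devices
  # (or partitions) for both the data store and the journal
  ceph-deploy osd create --filestore --data /dev/sdb1 \
      --journal /dev/sdc1 tldhost01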
At this point I'm stuck. I've gone through all the documentation I can
find, and although it frequently mentions that Ceph started out storing
its data on the filesystem and that doing so is still well supported, I
can't actually find any documentation that says how to do it. When we
started this project we used information from the quickstart documents
to get filestore OSDs set up, but even the quickstart documents no
longer seem to supply that information.
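For reference, my best guess at what 'prepare' used to do under the
hood, pieced together from the old manual-deployment docs, is something
like the sketch below. It is unverified (ids, caps, and CRUSH handling
may be off), and it's exactly the hand-rolling I was hoping ceph-deploy
would keep doing for us:

  # UNVERIFIED sketch: manually creating one directory-backed
  # filestore OSD, per the old manual-deployment docs
  UUID=$(uuidgen)
  ID=$(ceph osd create $UUID)           # allocate an OSD id
  mkdir -p /var/lib/ceph/osd/ceph-$ID   # plain directory, no mkfs/mount
  ceph-osd -i $ID --mkfs --mkkey --osd-uuid $UUID
  ceph auth add osd.$ID osd 'allow *' mon 'allow profile osd' \
      -i /var/lib/ceph/osd/ceph-$ID/keyring
  ceph osd crush add osd.$ID 1.0 host=$(hostname -s)
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
  systemctl start ceph-osd@$ID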
Thanks for any pointers anyone can supply.
Matt