Discussion:
[ceph-users] PERC H710 raid card
Robert Fantini
2014-07-16 20:55:21 UTC
I have 2 Dell systems with PERC H710 RAID cards. These are very good high-end
cards, but they do not support JBOD.

They support RAID 0, 1, 5, 6, 10, 50, and 60.

lspci shows them as: LSI Logic / Symbios Logic MegaRAID SAS 2208
[Thunderbolt] (rev 05)

The firmware Dell uses on the card does not support JBOD.

My question is: how can these cards best be used for Ceph? Or should they not be used at all?
Andrey Korolyov
2014-07-16 21:01:03 UTC
Hi,

A single-disk RAID-0 configuration with writeback cache is probably the best
possible option.
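
For example, creating such a volume with MegaCLI might look roughly like this; the binary may be installed as MegaCli64 or megacli depending on packaging, and the adapter number (-a0) and [enclosure:slot] pair below are placeholders for whatever your controller actually reports:

  # List physical drives to find their enclosure:slot IDs (adapter 0 assumed)
  MegaCli64 -PDList -a0 | egrep 'Enclosure Device ID|Slot Number|Firmware state'

  # Create a single-drive RAID-0 virtual disk with writeback cache and read-ahead
  # [32:2] is a placeholder enclosure:slot pair; repeat once per OSD drive
  MegaCli64 -CfgLdAdd -r0 [32:2] WB RA Direct -a0

Note that WB falls back to write-through if the controller's BBU is missing or bad, unless you also pass CachedBadBBU, which trades safety on power loss for performance.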
Shain Miley
2014-07-16 20:55:44 UTC
Robert,
We use those cards here in our Dell R-720 servers.

We just ended up creating a bunch of single-disk RAID-0 units, since
there was no JBOD option available.

Shain
--
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smiley at npr.org | 202.513.3649
Dennis Kramer (DT)
2014-07-17 06:04:03 UTC
Hi,

What do you recommend in case of a disk failure in this kind of
configuration? Do you bring down the host when you replace the
disk and re-create the RAID-0 for the replaced disk? I reckon
Linux doesn't automatically pick up the disk replacement either...

Dennis
Jake Young
2014-07-17 21:11:57 UTC
There are two command-line tools on Linux for LSI cards: megacli and
storcli.

You can do pretty much everything from those tools.
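
For example, a disk replacement in a single-disk RAID-0 setup could be handled online with MegaCLI along these lines; the adapter number and enclosure:slot values below are placeholders, so check what your system actually reports:

  # After swapping the failed drive, check its state (adapter 0 assumed)
  MegaCli64 -PDList -a0 | egrep 'Slot Number|Firmware state'

  # If the replacement drive carries a foreign configuration, clear it
  MegaCli64 -CfgForeign -Clear -a0

  # Re-create the single-drive RAID-0 on the replaced disk ([32:4] is a placeholder)
  MegaCli64 -CfgLdAdd -r0 [32:4] WB RA Direct -a0

The new virtual disk should then show up to Linux as a fresh block device without rebooting the host, and the OSD can be re-created on it.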

Jake
Paul Santos
2014-07-16 21:02:04 UTC
In my test cluster, on systems with similar RAID cards, I create single-disk RAID-0 volumes.

That does the trick.

Paul



Robert Fantini
2014-07-16 21:09:33 UTC
Thank you very much for the responses!