Discussion:
[ceph-users] Stale pg_upmap_items entries after pg increase
Rene Diepstraten
2018-11-20 16:26:33 UTC
Hi.

Today I've been looking at upmap and the balancer in upmap mode.
The balancer has previously run in upmap mode, and today, after an
expansion, I increased the pg count of two pools.

I found pg_upmap_items entries that redirect from osds which are no
longer active for the pg:

See this pg, which has an upmap redirect from osd.6 to osd.14:
```
***@mon01:~# ceph osd dump | grep upmap | grep -w '2\.81'
pg_upmap_items 2.81 [6,14]
```
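
For scripting, the full list of upmap entries can also be dumped as JSON
(this assumes jq is installed on the mon):
```
***@mon01:~# ceph osd dump -f json | jq '.pg_upmap_items'
```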

The pg is actually present on other osds:
```
***@mon01:~# ceph pg dump | awk '/^2\.81/ {printf "PG %s is active on osds %s\n", $1, $15}'
dumped all
PG 2.81 is active on osds [39,30,51]
```
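
The same can be seen without the awk via ceph pg map (output roughly as
below; the osdmap epoch is a placeholder):
```
***@mon01:~# ceph pg map 2.81
osdmap eNNNN pg 2.81 (2.81) -> up [39,30,51] acting [39,30,51]
```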

The pg 2.81 is active+clean, so there's no reference to osd.6 or osd.14
anywhere.

Is this expected behaviour? Is there any way to clean up the upmap
entries and remove these stale ones?
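
A single entry can presumably be dropped by hand, e.g.:
```
***@mon01:~# ceph osd rm-pg-upmap-items 2.81
```
but doing that for every affected pg doesn't seem very practical.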

Thanks in advance.


Kind regards,

René Diepstraten
PCextreme B.V.
Dan van der Ster
2018-11-20 16:37:08 UTC
I've noticed the same and have a script to help find these:

https://github.com/cernceph/ceph-scripts/blob/master/tools/clean-upmaps.py
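
Roughly, the idea is to compare the "from" OSDs of each pg_upmap_items
entry against the pg's current up set; a minimal shell sketch of that
idea (not the script itself, and assuming jq plus Luminous-style JSON
field names):
```
# For every pg that has an upmap entry, print its "from" OSDs next to
# the pg's current up set so that stale redirects stand out.
# Field names (pg_upmap_items, mappings, from, up) are assumed from a
# Luminous/Mimic "ceph osd dump -f json"; verify on your own cluster.
ceph osd dump -f json |
  jq -r '.pg_upmap_items[] | "\(.pgid) \([.mappings[].from])"' |
  while read -r pgid froms; do
      up=$(ceph pg map "$pgid" -f json | jq -c '.up')
      echo "pg $pgid: upmap from osds $froms, currently up $up"
  done
```
Anything stale that turns up can then be removed with
ceph osd rm-pg-upmap-items <pgid>.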

-- dan

Rene Diepstraten
2018-11-20 21:21:19 UTC
Thanks very much; I can use this.

It would be nice if the balancer module had functionality to check for
and clean up these stale entries.

I may create an issue for this.


x***@zte.com.cn
2018-11-21 06:39:36 UTC
I've sent a PR (https://github.com/ceph/ceph/pull/25196) for this issue, which might help.