LRU mechanism question

Posted by siroga
July 07, 2010 01:10AM
Hi,
I just started playing with memcached, and while doing very basic stuff I
found one thing that confused me a lot.
I have memcached running with default settings - 64M of memory for
caching.
1. Called flushAll to clean the cache.
2. Inserted 100 byte arrays of 512K each - this should consume about 51M
of memory, so I should have enough space to keep all of them - and to
verify that, I called get() for each of them - as expected, all arrays are
present.
3. Called flushAll again - so the cache should be clear.
4. Inserted 100 arrays of a smaller size (256K). I also expected to
have enough memory to store them (overall I need about 26M), but
surprisingly to me, when calling get(), only the last 15 were found in the
cache!!!

It looks like memcached still holds the memory occupied by the first 100
arrays.
memcache-top says that only 3.8M out of 64M is used.

Any info/explanation on memcached memory management details is very
welcome. Sorry if it is a well-known feature, but I did not find much
on the wiki that would suggest an explanation.

Regards,
Sergei

Here is my test program (I got the same result using both the danga and
spymemcached clients):

MemCachedClient cl;

@Test
public void strange() throws Throwable {
    byte[] testLarge = new byte[1024 * 512];
    byte[] testSmall = new byte[1024 * 256];
    int COUNT = 100;

    cl.flushAll();
    Thread.sleep(1000);
    for (int i = 0; i < COUNT; i++) {
        cl.set("largekey" + i, testLarge, 600);
    }
    // Report the first large key that is still present in the cache.
    for (int i = 0; i < COUNT; i++) {
        if (null != cl.get("largekey" + i)) {
            System.out.println("First not null " + i);
            break;
        }
    }

    Thread.sleep(1000);
    cl.flushAll();
    Thread.sleep(1000);
    for (int i = 0; i < COUNT; i++) {
        cl.set("smallkey" + i, testSmall, 600);
    }
    // Report the first small key that is still present in the cache.
    for (int i = 0; i < COUNT; i++) {
        if (null != cl.get("smallkey" + i)) {
            System.out.println("First not null " + i);
            break;
        }
    }
}
Matt Ingenthron
Re: LRU mechanism question
July 07, 2010 01:30AM
Hi Sergei,

For various reasons (performance, avoiding memory fragmentation),
memcached uses a memory allocation approach called slab allocation. The
memcached flavor of it can be found here:

http://code.google.com/p/memcached/wiki/MemcachedSlabAllocator

Chances are your items didn't fit into the slab classes that had already
been allocated. There are stats that show the details, and you can
potentially do some slab tuning.

Hope that helps,

- Matt
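
To make the slab-class idea concrete, here is a sketch of the sizing scheme. This is not memcached's actual code; the 96-byte base, 1.25 growth factor, and 1 MB page/item limit are assumed defaults, for illustration only:

```java
// Illustrative sketch of memcached-style slab class sizing.
// Assumed numbers (for illustration only): smallest chunk 96 bytes,
// growth factor 1.25, 1 MB page/item limit.
public class SlabClasses {
    static final int PAGE_SIZE = 1 << 20; // 1 MB

    // Smallest chunk size >= itemSize in a 1.25x geometric progression.
    static int chunkFor(int itemSize) {
        int chunk = 96;
        while (chunk < itemSize && chunk < PAGE_SIZE) {
            chunk = (int) (chunk * 1.25);
        }
        return Math.min(chunk, PAGE_SIZE);
    }

    public static void main(String[] args) {
        // The two value sizes from the test land in different classes,
        // so pages handed to one class are unavailable to the other.
        System.out.println("512K values need chunks of " + chunkFor(512 * 1024));
        System.out.println("256K values need chunks of " + chunkFor(256 * 1024));
        System.out.println("Same class? "
                + (chunkFor(512 * 1024) == chunkFor(256 * 1024)));
    }
}
```

Under these assumed parameters the 512K and 256K values map to different chunk sizes, which is why memory already assigned to one class can't serve the other.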

dormando
Re: LRU mechanism question
July 07, 2010 01:40AM
Here's a more succinct and to-the-point page:

http://code.google.com/p/memcached/wiki/NewUserInternals
^ If your question isn't answered there, ask for clarification and I'll
update the page.

Your problem is about slab page preallocation, I'd guess.
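
The "only the last 15 were found" pattern is what a per-slab-class LRU produces when the class only got a handful of pages: 100 inserts cycle through a fixed number of chunks and the oldest entries are evicted. A minimal sketch follows; the capacity of 15 is assumed purely to mirror the numbers from the test, and memcached's real LRU counts chunks per slab class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal per-class LRU sketch. The capacity is an assumption chosen to
// mirror the observed behavior; memcached keeps one LRU per slab class.
public class ClassLru {
    static Map<String, byte[]> fill(final int capacity, int inserts) {
        Map<String, byte[]> lru =
                new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(
                            Map.Entry<String, byte[]> eldest) {
                        return size() > capacity; // evict the oldest entry
                    }
                };
        for (int i = 0; i < inserts; i++) {
            lru.put("smallkey" + i, new byte[0]);
        }
        return lru;
    }

    public static void main(String[] args) {
        Map<String, byte[]> lru = fill(15, 100);
        // Only the most recent 15 keys survive 100 inserts.
        System.out.println(lru.containsKey("smallkey84")); // false
        System.out.println(lru.containsKey("smallkey85")); // true
    }
}
```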

Marc Bollinger
Re: LRU mechanism question
July 07, 2010 01:40AM
Sergei,

One more tidbit that doesn't appear in either of those links (though
I'm not sure it'd necessarily be super appropriate in either) and that
may throw off new users: `flush`-based commands only invalidate
objects, they do _not_ clear the data store. The above links should be
enough to get you rolling, though.

- Marc
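
A toy model of flush-as-invalidation (hypothetical class and method names; the real server compares each item's timestamp against the last flush_all time on read rather than freeing anything eagerly):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of flush-as-invalidation. Names are hypothetical; the point
// is that flushAll() records a time instead of freeing any memory, and
// get() treats anything stored before that time as gone.
public class LazyFlush {
    private final Map<String, Long> setTimes = new HashMap<>();
    private final Map<String, String> data = new HashMap<>();
    private long flushedAt = Long.MIN_VALUE;
    private long clock = 0; // logical clock instead of wall time

    void set(String key, String value) {
        clock++;
        data.put(key, value);
        setTimes.put(key, clock);
    }

    void flushAll() {
        clock++;
        flushedAt = clock; // nothing is freed here
    }

    String get(String key) {
        Long t = setTimes.get(key);
        if (t == null || t <= flushedAt) {
            return null; // invalidated, though still occupying memory
        }
        return data.get(key);
    }

    public static void main(String[] args) {
        LazyFlush cache = new LazyFlush();
        cache.set("a", "1");
        cache.flushAll();
        System.out.println(cache.get("a")); // null: invalidated, not freed
        cache.set("b", "2");
        System.out.println(cache.get("b")); // 2
    }
}
```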

Brian Moon
Re: LRU mechanism question
July 07, 2010 02:40AM
Just to pile on: test data that is all the exact same size like that is
probably a very bad test of memcached. Most likely, your real data is
not all the exact same size.

Brian.
--------
http://brian.moonspot.net/

Sergei Bobovich
RE: LRU mechanism question
July 07, 2010 04:10AM
Thanks, Brian,
I understand that. My goal here is to better understand possible limitations
and set expectations properly. Actually, based on what I saw in my tests (if
the second series of inserts is also 512K, then all of them are stored
successfully), I would conclude that if my data is roughly the same size
(let's say from 9 to 10K), then I will do much better by making all data
pieces exactly the same size (aligned to 10K). Again, this is speculation
without knowing the internals, but my impression is that memcached
successfully reuses slots of the same size.

Regards,
Sergei

Guille -bisho-
Re: LRU mechanism question
July 07, 2010 07:00AM
If your memory is very low (only 64M), it will work better the smaller
your chunks are, because slabs for big chunks occupy a lot of memory.
With gigs of RAM (people running dedicated memcached boxes typically
reserve 70-80% of total RAM for it), the slab allocation does not pose
any problem.

I agree that a flush should probably also free allocated slabs, but
flush is really never used in production, for obvious reasons :)



--
Guille -ℬḭṩḩø- <[email protected]>
:wq
Jakub Łopuszański
Re: LRU mechanism question
July 07, 2010 03:00PM
If you want better control over memory usage, try my patch:
http://groups.google.com/group/memcached/msg/a5b5188207081a2b?pli=1
We use it successfully at http://nasza-klasa.pl
It frees unused objects as soon as they expire, which gives you far
lower memory consumption and more informative charts in Munin.

Adam Lee
Re: LRU mechanism question
July 07, 2010 07:30PM
That's not really true in practice. Yes, memcached does reuse slots, but
your items don't need to be exactly the same size, they just need to
be in the same slab class. In production, you'll probably never run into a
situation like your test, where 100% of the slab space is allocated to a
single item size.

Memcached is very good at what it does.
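
The same-slab-class point can be sketched with an assumed sizing scheme (96-byte base, 1.25 growth factor; illustrative numbers, not memcached's source):

```java
// Sketch of why nearby sizes reuse each other's slots: they only need
// to share a slab class. The 96-byte base and 1.25 growth factor are
// assumptions for illustration, not memcached's actual constants.
public class SameClass {
    // Smallest chunk size >= size in a 1.25x geometric progression.
    static int chunkFor(int size) {
        int chunk = 96;
        while (chunk < size) {
            chunk = (int) (chunk * 1.25);
        }
        return chunk;
    }

    public static void main(String[] args) {
        // A 9K value and a 10K value land in the same chunk size, so
        // slots freed by one are reusable by the other with no tuning.
        System.out.println(chunkFor(9 * 1024) == chunkFor(10 * 1024));
    }
}
```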



--
awl