Updated: Problems with two memcached Java clients: spy and gwhalin

Posted by Shi Yu 
Following up on my previous question: I tried to read those 6 million
<key,value> records back out. Both APIs are capable of it, but the
spymemcached API is faster (16 minutes) than Whalin's v2.5.1 distribution
(24 minutes) (http://github.com/gwhalin/Memcached-Java-Client/downloads).
Since spymemcached cannot insert that many, I ended up with a strange
hybrid setup, using Whalin's API to store and spymemcached to
read ...

I am really curious why spy cannot store up to 6 million...


On Sat, Oct 16, 2010 at 12:59 PM, Shi Yu <[email protected]> wrote:
> Hi,
>
> I have two problems when using the memcached Java clients spymemcached
> (http://code.google.com/p/spymemcached/) and the gwhalin Java client
> for memcached (http://github.com/gwhalin/Memcached-Java-Client). I
> found that spymemcached failed to store more than 4.3 million
> records, sometimes only 3.7 million (please see my code below). There was
> no error, no exception; the code simply stopped at
> 4.3 million and never even hit the final line. In contrast, the
> gwhalin Java client was able to insert 6 million records without
> a problem; however, comparing the speed of inserting the first 4 million
> records, the gwhalin client is much slower than spymemcached.
> The memcached server is set up using the following command:
> "./memcached -d -m 4000 -l 127.0.0.1 -p 11211", and I think there is no
> problem on the server side. What is the problem here? Should I adjust
> any settings? Thanks!
>
> -Shi
>
>
> // spymemcached code
> public static void main(String[] args) throws Exception {
>     MemcachedClient mc = new MemcachedClient(
>             new InetSocketAddress("ocuic32.research", 11211));
>     mc.flush();
>     System.out.println("Memcached flushed ...");
>     int count = 0;
>     for (int i = 0; i < 6000000; i++) {
>         String a = "String" + i;
>         String b = "Value" + i;
>         mc.add(a, i, b);
>         count++;
>         if (String.valueOf(count).endsWith("00000"))
>             System.out.println(count + " elements added.");
>     }
>     System.out.println("done " + count + " records inserted");
>     // spymemcached never reaches this line
> }
>
>
>
> // gwhalin memcached code
> public static void main(String[] args) throws Exception {
>     BasicConfigurator.configure();
>     String[] servers = { "ocuic32.research:11211" };
>     SockIOPool pool = SockIOPool.getInstance();
>     pool.setServers(servers);
>     pool.setFailover(true);
>     pool.setInitConn(10);
>     pool.setMinConn(5);
>     pool.setMaxConn(250);
>     pool.setMaintSleep(30);
>     pool.setNagle(false);
>     pool.setSocketTO(3000);
>     pool.setAliveCheck(true);
>     pool.initialize();
>
>     MemcachedClient mcc = new MemcachedClient();
>     mcc.flushAll();
>     int count = 0;
>     for (int i = 0; i < 6000000; i++) {
>         String a = "String" + i;
>         String b = "Value" + i;
>         String sha1_ad1 = AeSimpleSHA1.SHA1(a);
>         mcc.set(sha1_ad1, b);
>         count++;
>         if (String.valueOf(count).endsWith("00000"))
>             System.out.println(count + " elements added.");
>     }
>     System.out.println("done " + count + " records inserted");
>     // gwhalin's client does reach this line, but slowly
> }
>
<<I am really curious why spy cannot store up to 6 million...

I'd definitely spend some more time analyzing what's going on, if I
were you, before going down that road. Turn on GC logging
(-verbose:gc) and see whether the app is heavily GC-ing when the program
"stops"; dump the threads (kill -3 <pid>), etc.
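The same check can also be made from inside the JVM with the standard management beans; a minimal, self-contained sketch (the class name is illustrative) that reports cumulative GC activity:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcReport {
    // Cumulative collection counts and times for every collector in this JVM.
    // Calling this periodically from the insert loop shows whether the program
    // appearing to "stop" coincides with heavy garbage collection.
    public static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + "ms");
            total += Math.max(0, gc.getCollectionTime()); // -1 means "undefined"
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("total GC time so far: " + totalGcMillis() + "ms");
    }
}
```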

On Sat, Oct 16, 2010 at 8:27 PM, Shi Yu <[email protected]> wrote:
> Follow up my previous question. I tried to read those 6 million
> <key,value> records out. [...]
On Oct 16, 5:27 pm, Shi Yu <[email protected]> wrote:

> I am really curious why spy cannot store up to 6 million...

I'm quite sure you can add more than 6M items to something. Part of
the problem might be that you're adding to an in-memory queue as fast
as possible, not checking results, and not slowing down when you blow past
maximum queue depths. That's not "normal" use of the simple cache
operation APIs.

If you just want to go fast, use the CacheLoader API:

http://dustin.github.com/java-memcached-client/apidocs/net/spy/memcached/util/CacheLoader.html

That will ensure you're balancing the speed of the fast loop in the
JVM against the time it takes to get over the network.
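One way to act on that advice without switching APIs is to block occasionally on the Future that spymemcached's set() returns. This is an untested sketch (the host name is the one used elsewhere in this thread, and the sync interval of 10,000 is an arbitrary choice), but it keeps the client's internal write queue bounded instead of letting the loop run arbitrarily far ahead of the network:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.Future;
import net.spy.memcached.MemcachedClient;

public class ThrottledLoad {
    public static void main(String[] args) throws Exception {
        MemcachedClient mc = new MemcachedClient(
                new InetSocketAddress("ocuic32.research", 11211));
        Future<Boolean> last = null;
        for (int i = 0; i < 6000000; i++) {
            last = mc.set("String" + i, 0, "Value" + i);
            // Every 10,000 ops, wait for the latest one to complete so the
            // in-memory operation queue cannot grow without bound.
            if (i % 10000 == 0) {
                last.get();
            }
        }
        if (last != null) {
            last.get(); // drain the tail before shutting down
        }
        mc.shutdown();
    }
}
```

With 6,000,000 inserts and a sync every 10,000, the loop blocks only 600 times, so the throttling cost is negligible next to the network round-trips.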
I have also tried the CacheLoader API; it throws a Java GC error. The
thing I haven't tried is to split the 6 million records into several
objects and try CacheLoader again. But I don't think it should be that
fragile and complicated. I have spent a whole day on this issue; for now
I am just relying on the hybrid approach to finish the work, but I would
be very interested to hear any solution to this issue.

Shi

On Sat, Oct 16, 2010 at 8:33 PM, Dustin <[email protected]> wrote:
> I'm quite sure you can add more than 6M items to something. [...]
On Oct 16, 6:45 pm, Shi Yu <[email protected]> wrote:
> I have also tried the CacheLoader API, it pops a java GC error. [...]

I cannot make any suggestions as to why you got an error without
knowing what you did and what error you got.

I would not expect the code that you posted to work without a lot of
memory, tweaking, and a very fast network, since you're just filling an
output queue as fast as Java will allow you.

You didn't share any code using CacheLoader, so I can only guess as
to how you may have used it to get an error. There are three
different methods you can use -- did you try to create a map with six
million values and then pass it to the CacheLoader API? That would
very likely give you an out-of-memory error.

You could also be taxing the GC considerably by converting integers
to strings to compute what amounts to a modulus, if your JVM doesn't
do proper escape analysis.

I can assure you there's no magic that will make it fail to load six
million records through the API as long as you account for the
realities of your network (which CacheLoader does for you) and your
available memory.
Okay. I had to empty the useful data from my memcached server to run
the experiment again. The code of the method is as follows.

public static void mapload() throws Exception {
    MemcachedClient mc = new MemcachedClient(
            new InetSocketAddress("ocuic32.research", 11211));
    mc.flush();
    System.out.println("Memcached flushed ...");
    CacheLoader cl = new CacheLoader(mc);
    System.out.println("Cache loader created ...");
    Map<String, String> map1 = new HashMap<String, String>();
    Map<String, String> map2 = new HashMap<String, String>();
    Map<String, String> map3 = new HashMap<String, String>();

    for (int i = 0; i < 1999999; i++) {
        map1.put("key" + i, "value" + i);
    }
    try {
        cl.loadData(map1);
        System.out.println("map1 loaded");
    } catch (Exception e1) {
        e1.printStackTrace();
    }
    map1 = null;

    for (int i = 2000000; i < 3999999; i++) {
        map2.put("key" + i, "value" + i);
    }
    try {
        cl.loadData(map2);
        System.out.println("map2 loaded");
    } catch (Exception e2) {
        e2.printStackTrace();
    }
    map2 = null;

    for (int i = 4000000; i < 5999999; i++) {
        map3.put("key" + i, "value" + i);
    }
    try {
        cl.loadData(map3);
        System.out.println("map3 loaded");
    } catch (Exception e3) {
        e3.printStackTrace();
    }
    map3 = null;

    System.out.println("All done");
}

I ran it with the following java command on a 64-bit Unix machine
which has 8G of memory. I separated the map into three parts, and it
still failed. To be honest, I think there is some bug in the spymemcached
input method. With Whalin's API there is no problem at all with only a
2G heap size; it's just a little slower, but that's definitely better
than being stuck for 6 hours on a buggy API.

java -Xms4G -Xmx4G -classpath ./lib/spymemcached-2.5.jar Memcaceload

Here is the error output:

2010-10-16 22:40:50.959 INFO net.spy.memcached.MemcachedConnection:
Added {QA sa=ocuic32.research/192.168.136.36:11211, #Rops=0, #Wops=0,
#iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect
queue
Memchaced flushed ...
Cache loader created ...
2010-10-16 22:40:50.989 INFO net.spy.memcached.MemcachedConnection:
Connection state changed for [email protected]
map1 loaded
map2 loaded
java.lang.OutOfMemoryError: Java heap space
at sun.nio.cs.UTF_8.newEncoder(UTF_8.java:51)
at java.lang.StringCoding$StringEncoder.<init>(StringCoding.java:215)
at java.lang.StringCoding$StringEncoder.<init>(StringCoding.java:207)
at java.lang.StringCoding.encode(StringCoding.java:266)
at java.lang.String.getBytes(String.java:947)
at net.spy.memcached.KeyUtil.getKeyBytes(KeyUtil.java:20)
at net.spy.memcached.protocol.ascii.OperationImpl.setArguments(OperationImpl.java:86)
at net.spy.memcached.protocol.ascii.BaseStoreOperationImpl.initialize(BaseStoreOperationImpl.java:48)
at net.spy.memcached.MemcachedConnection.addOperation(MemcachedConnection.java:601)
at net.spy.memcached.MemcachedConnection.addOperation(MemcachedConnection.java:582)
at net.spy.memcached.MemcachedClient.addOp(MemcachedClient.java:277)
at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:314)
at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:691)
at net.spy.memcached.util.CacheLoader.push(CacheLoader.java:92)
at net.spy.memcached.util.CacheLoader.loadData(CacheLoader.java:61)
at net.spy.memcached.util.CacheLoader.loadData(CacheLoader.java:75)
at MemchacedLoad.mapload(MemchacedLoad.java:90)
at MemchacedLoad.main(MemchacedLoad.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

Shi

On Sat, Oct 16, 2010 at 10:23 PM, Dustin <[email protected]> wrote:
>  I cannot make any suggestions as to why you got an error without
> knowing what you did and what error you got. [...]
Okay, good to know. Is there any asynchronous write method in
spy? I don't understand: I set the first two maps to null before
writing the third one. Do you mean the client still reads over all
the existing records on the server when inserting new data? Or is it
a heap-size error on the server side?

On Sun, Oct 17, 2010 at 12:36 AM, Jonathan Leech <[email protected]> wrote:
> Sounds like bug 125 to me. Your thread inserting the records can write them
> to the queue faster than they are written from the queue to memcached; the
> queue fills up with more and more records, and each one takes longer than
> the last to get written. If you are also bouncing up against the upper limit
> of the heap, then the VM will compound the issue by spending a lot of time
> garbage collecting.
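The failure mode Jonathan describes is not specific to memcached; a self-contained java.util.concurrent sketch (the sizes are made up for illustration) shows how a bounded queue forces the producer to match the consumer's pace instead of growing the heap:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    static final int CAPACITY = 1000;   // bound on producer lead
    static final int ITEMS = 100000;

    static int run() throws Exception {
        // A bounded queue: put() blocks when full, so the producer can never
        // get more than CAPACITY items ahead of the consumer.
        final BlockingQueue<Integer> q = new ArrayBlockingQueue<>(CAPACITY);

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < ITEMS; i++) {
                    q.take(); // draining stands in for the network writes
                }
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        for (int i = 0; i < ITEMS; i++) {
            q.put(i); // blocks when full instead of growing the heap
        }
        consumer.join();
        return q.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("backlog at end: " + run()); // prints 0
    }
}
```

An unbounded queue in the same setup would instead accumulate whatever lead the producer builds, which is exactly the heap pressure described above.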
>
> On Oct 16, 2010, at 6:53 PM, Boris Partensky <[email protected]>
> wrote:
>
>> I'd definitely spend some more time analyzing what's going on if I
>> were you before going down that road. [...]
On Oct 16, 9:28 pm, Shi Yu <[email protected]> wrote:
>         Map<String,String> map1 = new HashMap<String,String>();
>         Map<String,String> map2 = new HashMap<String,String>();
>         Map<String,String> map3 = new HashMap<String,String>();

You're loading at least four million strings into two million hash
table entries on a 64-bit system. Each hash table entry contains a
pointer to the key, a pointer to the value, a pointer to the next entry,
and an integer copy of the hash code. That's a huge amount of memory
just to load up a generated set of data.

If you instead either implemented an iterator as a generator to
produce dynamically what that large map holds, or just used the
CacheLoader.push method in a way similar to how you were calling add
before, I suspect you'd have no problems and significantly lower memory
consumption.
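A sketch of that second suggestion (untested; push is the method visible in the stack trace above, and per the apidocs linked earlier it is meant to queue one entry at a time rather than requiring a pre-built map):

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;
import net.spy.memcached.util.CacheLoader;

public class PushLoad {
    public static void main(String[] args) throws Exception {
        MemcachedClient mc = new MemcachedClient(
                new InetSocketAddress("ocuic32.research", 11211));
        CacheLoader cl = new CacheLoader(mc);
        // Generate each entry on the fly instead of materializing three
        // two-million-entry HashMaps first; nothing accumulates on the heap.
        for (int i = 0; i < 6000000; i++) {
            cl.push("key" + i, "value" + i);
        }
        mc.shutdown();
    }
}
```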
Hi Dustin. I have to get on with my work now, so I probably won't spend
any more time on this issue. Please, before you suggest anything else, try
an experiment loading more than 6 million records using the same API; I
would be happy to hear how you do it. I now fully rely on Whalin's API;
it can handle 14 million records without any problem.

Shi

On Sun, Oct 17, 2010 at 1:45 AM, Dustin <[email protected]> wrote:
>  You're loading at least four million strings into two million hash
> table entries on a 64-bit system. [...]
On Oct 16, 11:51 pm, Shi Yu <[email protected]> wrote:
> Please, before you suggest, try some experiment to
> load more than 6 million records using the same API. [...]

I do actually try the code now and then -- I have this in the source
tree, which I think is very close to what you were trying to do:

http://github.com/dustin/java-memcached-client/blob/master/src/test/manual/net/spy/memcached/test/LoaderTest.java

I modified that as I suggested, setting the op timeout to 0 and the
number of items to 14,000,000 (which you mentioned here). Running
both the server and the client on my fairly old MBP, it reported
144534ms (which is nearly 97k sets/second).
Thanks Dustin, noted. I will try again.

On Sun, Oct 17, 2010 at 2:38 AM, Dustin <[email protected]> wrote:
>  I modified that as I suggested to set the op timeout to 0 and the
> number of items to 14,000,000. [...]