Details
- Type: Bug
- Resolution: Duplicate
- Priority: Major
- Affects Version: 2.0
- Security Level: Public
- Environment: CentOS 6.2 64-bit, toy build 2.0.0-1003
Description
Cluster information:
- 8 CentOS 6.2 64-bit servers, each with a 4-core CPU
- Each server has 32 GB RAM and a 390 GB SSD disk.
- SSD disk formatted ext4, mounted on /data
- Each server has its own drive; no disk sharing with other servers.
- Cluster has 2 buckets, default (12 GB) and saslbucket (12 GB), with consistent views enabled.
- Loaded 9 million items into each bucket. Each key has a size of 512 to 1024 bytes.
- Each bucket has one design doc with 2 views per doc (default: d1, saslbucket: d11)
- Created the cluster with 6 nodes running toy build 2.0.0-1003
- Cluster has a constant load of 8K ops, with view queries, view compaction, and data compaction running at all times
10.6.2.37
10.6.2.38
10.6.2.39
10.6.2.40
10.6.2.42
10.6.2.43
- Data path /data
- View path /data
- When the number of items reached about 7+ million, the cluster started showing memcached timeouts.
- I then stopped all load on the cluster and let it idle from around 23:00 Oct 4, 2012 until now.
- The log shows the popup error "write commit failed" even though each disk still has plenty of free space:
10.6.2.42
Total disk: 394G Used: 108G Free: 286G Percent used: 28% Disk: /data
10.6.2.39
Total disk: 394G Used: 143G Free: 232G Percent used: 39% Disk: /data
10.6.2.40
Total disk: 394G Used: 53G Free: 322G Percent used: 15% Disk: /data
10.6.2.38
Total disk: 394G Used: 117G Free: 257G Percent used: 32% Disk: /data
10.6.2.43
Total disk: 394G Used: 77G Free: 318G Percent used: 20% Disk: /data
10.6.2.37
Total disk: 394G Used: 124G Free: 250G Percent used: 34% Disk: /data
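Since "write commit failed" appears despite the free space shown above, one quick check worth running on each node (a diagnostic sketch, not something from the original report) is whether the ext4 inode table on the data partition is exhausted, which causes write failures even when plenty of free blocks remain:

```shell
#!/bin/sh
# Sketch: diagnostics for "write commit failed" despite apparent free space.
# MOUNT defaults to / so the snippet runs anywhere; on these nodes it would
# be /data (the data and view path from the cluster description).
MOUNT="${1:-/}"

# Block-level usage -- what the per-node summaries above report:
df -h "$MOUNT"

# Inode usage -- an exhausted ext4 inode table causes write failures
# even when 'df -h' still shows hundreds of GB free:
df -i "$MOUNT"
```

If `IUse%` in the second table is at or near 100%, the filesystem can refuse writes regardless of free blocks.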
10.6.2.39
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 16G 29G 36% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 143G 232G 39% /data
10.6.2.40
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 12G 33G 27% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 53G 322G 15% /data
10.6.2.42
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 9.3G 36G 21% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 108G 286G 28% /data
10.6.2.43
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 13G 32G 29% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 79G 316G 20% /data
10.6.2.38
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 20G 25G 45% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 117G 257G 32% /data
10.6.2.37
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
47G 12G 33G 27% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 485M 31M 429M 7% /boot
/dev/mapper/VolGroup01-Data
394G 125G 250G 34% /data
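For reference, the one-line per-node summaries earlier can be reproduced from raw `df` output with a small awk helper (a hypothetical sketch; `summarize_df` is not part of any tool mentioned here, and it assumes POSIX-style `df -hP` output where each filesystem stays on a single line):

```shell
#!/bin/sh
# Turn the last line of 'df -hP <mount>' output into the one-line summary
# format used in this report (Total disk / Used / Free / Percent used / Disk).
summarize_df() {
  awk 'END { printf "Total disk: %s Used: %s Free: %s Percent used: %s Disk: %s\n", $2, $3, $4, $5, $6 }'
}

# Example with the captured df line from node 10.6.2.39:
printf '/dev/mapper/VolGroup01-Data 394G 143G 232G 39%% /data\n' | summarize_df
# -> Total disk: 394G Used: 143G Free: 232G Percent used: 39% Disk: /data
```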
Link to collect_info of all nodes: https://s3.amazonaws.com/packages.couchbase/collect_info/orange/2_0_0/201210/8nodes-col-info-toy-build-200-1003-write-commit-failed-20121005-143125.tgz
Link to memcached logs of all nodes: https://s3.amazonaws.com/packages.couchbase/memcached/orange/2_0_0/201210/6nodes-toybuild-200-1003-memcached-timout-status71-20121005-152220.tgz