Study notes/tools 2013. 4. 22. 09:51

Quick install guide: http://ceph.com/docs/master/install/rpm/


  • Install on each node

    rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
    rpm -Uvh http://ceph.com/rpm-bobtail/el6/x86_64/ceph-release-1-0.el6.noarch.rpm
    yum install ceph



  • Verify

ceph -v

Expected output: ceph version 0.56.4 ...


  • configuration (/etc/ceph/ceph.conf)
[global]

	# For version 0.55 and beyond, you must explicitly enable 
	# or disable authentication with "auth" entries in [global].
	
	auth cluster required = cephx
	auth service required = cephx
	auth client required = cephx

[osd]
	osd journal size = 1000
	
	#The following assumes ext4 filesystem.
	filestore xattr use omap = true


	# For Bobtail (v 0.56) and subsequent versions, you may 
	# add settings for mkcephfs so that it will create and mount
	# the file system on a particular OSD for you. Remove the comment `#` 
	# character for the following settings and replace the values 
	# in braces with appropriate values, or leave the following settings 
	# commented out to accept the default values. You must specify the 
	# --mkfs option with mkcephfs in order for the deployment script to 
	# utilize the following settings, and you must define the 'devs'
	# option for each osd instance; see below.

	#osd mkfs type = {fs-type}
	#osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"	
	#osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"

	# For example, for ext4, the mount options might look like this:
	
	#osd mount options ext4 = user_xattr,rw,noatime

	# Execute $ hostname to retrieve the name of your host,
	# and replace {hostname} with the name of your host.
	# For the monitor, replace {ip-address} with the IP
	# address of your host.

[mon.a]

	host = {hostname}
	mon addr = {ip-address}:6789

[osd.0]
	host = {hostname}
	
	# For Bobtail (v 0.56) and subsequent versions, you may 
	# add settings for mkcephfs so that it will create and mount
	# the file system on a particular OSD for you. Remove the comment `#` 
	# character for the following setting for each OSD and specify 
	# a path to the device if you use mkcephfs with the --mkfs option.
	
	#devs = {path-to-device}

[osd.1]
	host = {hostname}
	#devs = {path-to-device}

[mds.a]
	host = {hostname}
  1. Open a command line on your Ceph server machine and execute hostname -s to retrieve the name of your Ceph server machine.

  2. Replace {hostname} in the sample configuration file with your host name.

  3. Execute ifconfig on the command line of your Ceph server machine to retrieve the IP address of your Ceph server machine.

  4. Replace {ip-address} in the sample configuration file with the IP address of your Ceph server host.

  5. Save the contents to /etc/ceph/ceph.conf on the Ceph server host.

  6. Copy the configuration file to /etc/ceph/ceph.conf on your client host.
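
The placeholder substitution in steps 1–4 can also be scripted. Here is a minimal sketch; the hostname ds1 and the address 192.168.2.13 are example values, and /tmp paths are used so the sample is self-contained:

```shell
#!/bin/sh
# Example values -- on a real node, use HOST=$(hostname -s) and the
# address reported by ifconfig.
HOST=ds1
ADDR=192.168.2.13

# A minimal stand-in for the [mon.a] part of the sample ceph.conf:
cat > /tmp/ceph.conf.sample <<'EOF'
[mon.a]
	host = {hostname}
	mon addr = {ip-address}:6789
EOF

# Fill in the placeholders:
sed -e "s/{hostname}/$HOST/g" -e "s/{ip-address}/$ADDR/g" \
	/tmp/ceph.conf.sample > /tmp/ceph.conf
cat /tmp/ceph.conf
```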


  • Copy the config to each node

chmod 644 ceph.conf

scp {user}@{server-machine}:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

 
  • Register hosts
    vi /etc/hosts

Add each node's hostname -s output and its IP address.
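
For example, for a two-node setup the entries might look like this (ds1 appears in the error log below; ds2 and both addresses are only illustrations):

```
192.168.2.13    ds1
192.168.2.14    ds2
```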


  • Create the directories each daemon will use

mkdir -p /var/lib/ceph/osd/ceph-0
mkdir -p /var/lib/ceph/osd/ceph-1
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/mds/ceph-a

※ Mount a fresh disk on the OSD directories. If you just run them on the root filesystem and journaling kicks in, there is no telling what will happen to the filesystem.
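
A sketch of what mounting a dedicated disk for osd.0 might look like (the device name /dev/sdb1 and the choice of xfs are assumptions; these commands are destructive, so double-check the device name first):

```
mkfs.xfs -f /dev/sdb1                                   # wipes the device!
mount -o rw,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
# Make it permanent across reboots:
echo '/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime 0 0' >> /etc/fstab
```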


 

  • Setup for Bobtail (v0.56 and later) (I am not sure exactly what this step is for.)

Run this on the mon node (central control seems to be done from the mon?):

cd /etc/ceph
mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring

Among other things, mkcephfs will deploy Ceph and generate a client.admin user and key. For Bobtail and subsequent versions (v 0.56 and after), the mkcephfs script will create and mount the filesystem for you, provided you specify the osd mkfs, osd mount, and devs settings in your Ceph configuration file.



※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※

Error)
2013-04-22 13:39:48.909389 7fe39ca14760 -1 provided osd id 0 != superblock's -1
2013-04-22 13:39:48.910138 7fe39ca14760 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0: (22) Invalid argument
failed: 'ssh root@ds1 /sbin/mkcephfs -d /tmp/mkfs.ceph.a06aefb0287e9bb354b62ba535


Solution) If the run fails because of a bad setting, delete everything under /var/lib/ceph/ before trying again.

A rerun does not clean these files up by itself, so you have to remove them manually and then retry.

※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※※
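
The manual cleanup can be sketched as follows (run on every node before retrying mkcephfs; the paths match the daemon directories created above):

```
service ceph -a stop                # stop any half-started daemons first
rm -rf /var/lib/ceph/osd/ceph-*/*   # clear OSD data dirs
rm -rf /var/lib/ceph/mon/ceph-*/*   # clear monitor data dir
rm -rf /var/lib/ceph/mds/ceph-*/*   # clear MDS data dir
```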
  • Start

service ceph -a start    (all hosts)

    Start a single daemon: service ceph start osd.0


  • Check that it is working

ceph health

Expected output: HEALTH_OK



  • Mount on a client


yum install ceph-fuse

ceph-fuse -m 192.168.2.13:6789 /mnt/cephClient/


Normally you would mount with mount -t ceph 192.168.2.13:6789:/ /mnt/cephClient/, but that fails with:

mount.ceph: modprobe failed, exit status 1
mount error: ceph filesystem not supported by the system


The stock CentOS kernel does not include the ceph kernel module that mount.ceph tries to load, so using the kernel client would mean rebuilding the kernel.

Instead, install ceph-fuse (yum install ceph-fuse) and mount through FUSE.
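
A minimal mount-and-verify sequence with ceph-fuse might look like this (the monitor address and mount point are taken from the example above; this requires a running cluster):

```
mkdir -p /mnt/cephClient
ceph-fuse -m 192.168.2.13:6789 /mnt/cephClient/
df -h /mnt/cephClient               # should list a ceph-fuse filesystem
fusermount -u /mnt/cephClient       # unmount when done
```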






  • Processes started on each machine by the installation

ps aux | grep ceph


[Result]

[mon]

/usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf


[mds]

root      3342  0.1  0.1 1090944 7916 ?        Ssl  03:13   0:00 /usr/bin/ceph-mds -i a --pid-file /var/run/ceph/mds.a.pid -c /tmp/ceph.conf.d6ddf446d1563675cb148db3eb3ca5f0


[osd.0]

root      3223  0.1  0.2 2486928 38388 ?       Ssl  17:25   0:03 /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /tmp/ceph.conf.17f3c04281dcf4407506b6e8c67ad89b


[osd.1]

root      2816  0.0  0.2 2486796 41292 ?       Ssl  17:29   0:02 /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /tmp/ceph.conf.839e7a5eeec05fb3198260a93aea402a

 

posted by cozyboy