ZeroMQ installation and deployment (touted as the fastest message queue / messaging middleware)
1: Storm is a real-time processing framework, so the messages it produces need to be handled quickly, for example by putting them into the ZeroMQ message queue. ZeroMQ is written in C++, while our programs run inside the JVM, so jzmq is needed as the bridge that glues the C++ interface to the Java interface.
ZeroMQ official site: http://zeromq.org/
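To make concrete what jzmq enables, here is a minimal sketch of the Java side of a ZeroMQ exchange, written against the jzmq binding's org.zeromq.ZMQ API. The class name, the TCP port 5555, and the reply text are arbitrary placeholders, and the sketch assumes jzmq and its native library are already installed as described later in this post.

import org.zeromq.ZMQ;

// Minimal sketch: a ZeroMQ REP (reply) socket served from the JVM via jzmq.
// Port 5555 and the reply payload are placeholders.
public class ZmqEchoServer {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);        // one I/O thread
        ZMQ.Socket socket = context.socket(ZMQ.REP); // reply socket
        socket.bind("tcp://*:5555");                 // listen on TCP port 5555

        byte[] request = socket.recv(0);             // block until a request arrives
        System.out.println("Received: " + new String(request));
        socket.send("World".getBytes(), 0);          // answer the requester

        socket.close();
        context.term();
    }
}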
1: MetaQ (full name Metamorphosis) is a high-performance, highly available, scalable distributed messaging middleware. Its design is inspired by LinkedIn's Kafka, but it is not a copy of Kafka. MetaQ features sequential message writes, high throughput, and support for local and XA transactions, which makes it suitable for high-throughput, ordered-message, broadcast, and log-shipping scenarios. It is currently used widely at Taobao and Alipay.
2: MetaQ concepts (a minimal producer sketch follows this list):
Producer (message producer)
Consumer (message consumer)
Topic (message topic)
Partition (partition)
Message (message)
Broker (the MetaQ server side)
Group (consumer group)
Offset (message offset)
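The concepts above map directly onto the Metamorphosis Java client. The sketch below is a hedged example of a Producer publishing a Message to a Topic via brokers discovered through ZooKeeper; the ZooKeeper address, topic name, and payload are placeholders, and the class and package names follow the client documented in the project's GitHub README, so check the version you actually use.

import com.taobao.metamorphosis.Message;
import com.taobao.metamorphosis.client.MessageSessionFactory;
import com.taobao.metamorphosis.client.MetaClientConfig;
import com.taobao.metamorphosis.client.MetaMessageSessionFactory;
import com.taobao.metamorphosis.client.producer.MessageProducer;
import com.taobao.metamorphosis.client.producer.SendResult;
import com.taobao.metamorphosis.utils.ZkUtils.ZKConfig;

// Hedged sketch of a MetaQ producer; addresses and names are placeholders.
public class MetaProducerDemo {
    public static void main(String[] args) throws Exception {
        MetaClientConfig config = new MetaClientConfig();
        ZKConfig zkConfig = new ZKConfig();
        zkConfig.zkConnect = "localhost:2181";      // ZooKeeper used for broker discovery
        config.setZkConfig(zkConfig);

        MessageSessionFactory sessionFactory = new MetaMessageSessionFactory(config);
        MessageProducer producer = sessionFactory.createProducer();

        String topic = "test-topic";
        producer.publish(topic);                    // declare the topic before sending

        SendResult result = producer.sendMessage(new Message(topic, "hello metaq".getBytes()));
        if (!result.isSuccess()) {
            System.err.println("Send failed: " + result.getErrorMessage());
        } else {
            System.out.println("Sent to partition " + result.getPartition());
        }

        sessionFactory.shutdown();                  // release client resources
    }
}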
3: Download: http://fnil.net
GitHub: https://github.com/killme2008/Metamorphosis
2: The ZeroMQ installation procedure is as follows (first upload zeromq-2.1.7.tar.gz to your virtual machine; that step is omitted here):
Then extract the archive, as shown below:
[root@slaver1 package]# tar -zxvf zeromq-2.1.7.tar.gz -C /home/hadoop/soft/
Since what we extracted is source code, it has to be compiled, and only after compiling can it be installed.
Then use this command to check the build environment:
[root@slaver1 zeromq-2.1.7]# ./configure
This fails with the error below: no C++ compiler was found, so we install one next:
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking how to create a ustar tar archive... gnutar
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking for gcc option to accept ISO C99... -std=gnu99
checking for g++... no
checking for c++... no
checking for gpp... no
checking for aCC... no
checking for CC... no
checking for cxx... no
checking for cc++... no
checking for cl.exe... no
checking for FCC... no
checking for KCC... no
checking for RCC... no
checking for xlC_r... no
checking for xlC... no
checking whether we are using the GNU C++ compiler... no
checking whether g++ accepts -g... no
checking dependency style of g++... none
checking whether gcc -std=gnu99 and cc understand -c and -o together... yes
checking for a sed that does not truncate output... /bin/sed
checking for gawk... (cached) gawk
checking for xmlto... no
checking for asciidoc... no
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for a sed that does not truncate output... (cached) /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc -std=gnu99... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1966080
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for ar... ar
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc -std=gnu99 object... ok
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking whether we are using the GNU C++ compiler... (cached) no
checking whether g++ accepts -g... (cached) no
checking dependency style of g++... (cached) none
checking for objdir... .libs
checking if gcc -std=gnu99 supports -fno-rtti -fno-exceptions... no
checking for gcc -std=gnu99 option to produce PIC... -fPIC -DPIC
checking if gcc -std=gnu99 PIC flag -fPIC -DPIC works... yes
checking if gcc -std=gnu99 static flag -static works... no
checking if gcc -std=gnu99 supports -c -o file.o... yes
checking if gcc -std=gnu99 supports -c -o file.o... (cached) yes
checking whether the gcc -std=gnu99 linker (/usr/bin/ld) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking whether the C compiler works... yes
checking whether we are using Intel C compiler... no
checking whether we are using Sun Studio C compiler... no
checking whether we are using clang C compiler... no
checking whether we are using gcc >= 4 C compiler... yes
checking whether the C++ compiler works... no
configure: error: Unable to find a working C++ compiler
To install the C++ compiler: with network access you can use yum; without it, download the packages in advance. Since I reinstalled the system a while ago, everything here starts from scratch, and on top of that my network was broken: the host machine could ping the virtual machine, but the VM could not ping the host, which clearly pointed to a DNS problem. Here is how I resolved it:
Configure the VM's virtual network editor so that the DNS of the host's NAT mode and the DNS of the VM's NAT mode match:
With network access:
[root@slaver1 hadoop]# yum install gcc-c++
For offline installation, install the RPMs as shown below:
rpm -i libstdc++-devel-4.4.7-3.el6.x86_64.rpm
rpm -i gcc-c++-4.4.7-3.el6.x86_64.rpm
rpm -i libuuid-devel-2.17.2-12.9.el6.x86_64.rpm
Then go back to the ZeroMQ directory and check the build environment again:
[root@slaver1 zeromq-2.1.7]# ./configure
This time the check fails with the error shown on its last line, reproduced below; a dependency is missing, so we fix that next:
configure: error: cannot link with -luuid, install uuid-dev.
Install the dependency manually; once that is done, re-run the environment check in the ZeroMQ directory and it should pass:
[root@slaver1 rpms-32]# rpm -ivh libuuid-devel-2.17.2-12.9.el6.i686.rpm
Preparing...                ########################################### [100%]
   1:libuuid-devel          ########################################### [100%]
If everything is fine, the last line looks like this:
config.status: executing libtool commands
Then compile by running make in the ZeroMQ directory, as shown below:
[root@slaver1 zeromq-2.1.7]# make
After compiling, install by running make install in the ZeroMQ directory, as shown below:
[root@slaver1 zeromq-2.1.7]# make install
That completes the ZeroMQ installation. Next, install jzmq (the bridge that lets Java call the C++ library), as shown below:
[root@slaver1 package]# unzip jzmq-master.zip -d /home/hadoop/soft/
Next, generate the configure script. As shown below, the command complains that the libtool package is missing, so install that package first:
[root@slaver1 jzmq-master]# ./autogen.sh
autogen.sh: error: could not find libtool. libtool is required to run autogen.sh.
Installing libtool runs into dependency problems of its own; the errors and the fix are shown below:
[root@slaver1 rpms-32]# rpm -ivh libtool-2.2.6-15.5.el6.i686.rpm
error: Failed dependencies:
        autoconf >= 2.58 is needed by libtool-2.2.6-15.5.el6.i686
        automake >= 1.4 is needed by libtool-2.2.6-15.5.el6.i686
[root@slaver1 rpms-32]# rpm -ivh autoconf-2.63-5.1.el6.noarch.rpm
Preparing...                ########################################### [100%]
   1:autoconf               ########################################### [100%]
[root@slaver1 rpms-32]# rpm -ivh automake-1.11.1-4.el6.noarch.rpm
Preparing...                ########################################### [100%]
   1:automake               ########################################### [100%]
[root@slaver1 rpms-32]# rpm -ivh libtool-2.2.6-15.5.el6.i686.rpm
Preparing...                ########################################### [100%]
   1:libtool                ########################################### [100%]
[root@slaver1 rpms-32]#
Then go back to the jzmq directory and run the script below to generate the configure script:
[root@slaver1 jzmq-master]# ./autogen.sh
Then run configure to check the build environment:
[root@slaver1 jzmq-master]# ./configure
Then compile, as shown below:
[root@slaver1 jzmq-master]# make
Then install, as shown below:
[root@slaver1 jzmq-master]# make install
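With jzmq installed, the client side of the earlier REP sketch looks like the hedged example below. The class name and port are the same placeholders as before; if the JVM cannot find the native library, point java.library.path or LD_LIBRARY_PATH at the directory make install used (typically /usr/local/lib).

import org.zeromq.ZMQ;

// Counterpart to the earlier REP sketch: a REQ (request) socket that connects,
// sends one request, and prints the reply. Address and payload are placeholders.
public class ZmqEchoClient {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.REQ);
        socket.connect("tcp://localhost:5555");      // connect to the REP server

        socket.send("Hello".getBytes(), 0);          // send a request
        byte[] reply = socket.recv(0);               // wait for the reply
        System.out.println("Reply: " + new String(reply));

        socket.close();
        context.term();
    }
}

If this runs without an UnsatisfiedLinkError, the jzmq native binding is installed correctly.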
3: Since Storm's control scripts are written in Python, a Python runtime is required. CentOS 6.4 already ships with Python, so there is nothing to install here:
[root@slaver1 jzmq-master]# which python
/usr/bin/python
Check the Python version, as shown below:
[root@slaver1 jzmq-master]# python -V
Python 2.6.6
4: Then install Storm itself; I have written that up before, so I won't repeat it here.
To be continued...