[![Build Status](https://travis-ci.org/yazgoo/fuse_kafka.svg?branch=master)](https://travis-ci.org/yazgoo/fuse_kafka)
[![Build Status](https://api.shippable.com/projects/549439afd46935d5fbc0a9cf/badge?branchName=master)](https://app.shippable.com/projects/549439afd46935d5fbc0a9cf/builds/latest)
[![Coverage Status](https://img.shields.io/coveralls/yazgoo/fuse_kafka.svg)](https://coveralls.io/r/yazgoo/fuse_kafka?branch=master)
[![Gitter](http://img.shields.io/badge/gitter-join%20chat-1dce73.svg?style=flat)](https://gitter.im/yazgoo/fuse_kafka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Documentation](http://img.shields.io/badge/doc-%E2%9C%93-blue.svg?style=flat)](http://yazgoo.github.io/fuse_kafka/html/)
[![Open Build Service](http://img.shields.io/badge/install-packages-yellow.svg?style=flat)](http://software.opensuse.org/download.html?project=home%3Ayazgoo&package=fuse_kafka)
[![Benchmarks](http://img.shields.io/badge/benchs-bonnie++-949494.svg?style=flat)](http://htmlpreview.github.io/?https://raw.githubusercontent.com/yazgoo/fuse_kafka/master/benchs/benchmarks.html#5)

[![Docker](http://dockeri.co/image/yazgoo/fuse_kafka)](https://registry.hub.docker.com/u/yazgoo/fuse_kafka/)

![fuse kafka logo](https://raw.githubusercontent.com/yazgoo/fuse_kafka/master/graphics/fuse_kafka_logo.png "Logo")

Intercepts all writes to specified directories and sends them
to Apache Kafka brokers. Well suited for log centralization.

Installing
==========

Packages for various distros can be installed from [these repositories](http://download.opensuse.org/repositories/home:/yazgoo/) at [openSUSE Build Service](https://build.opensuse.org/package/show/home:yazgoo/fuse_kafka).

The following should install the repositories and then install fuse\_kafka:

    # curl -O \
        https://raw.githubusercontent.com/yazgoo/fuse_kafka/master/setup.sh \
        && md5sum -c <(echo "99c26578e926eb807a02d7a22c6c2e82  setup.sh") \
        && chmod +x setup.sh && ./setup.sh

(for more options - e.g. to install on a machine with no access to the repos - see the 'setup.sh options' section)

Configuration
=============

A default configuration file is available in conf/fuse\_kafka.properties.
An explanation for each parameter is available in this file.
The packages should install it as /etc/fuse\_kafka.conf.
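
The settings in this file use Python literal syntax for their values (for example, `fuse_kafka_directories` is assigned a Python list). As an illustration only - this is a hypothetical sketch, not code shipped with fuse\_kafka - such a setting can be read like this, assuming `key=value` lines and `#` comments:

````python
# Hypothetical sketch: read a setting from a fuse_kafka-style
# properties file, where values use Python literal syntax,
# e.g. fuse_kafka_directories=["/tmp/fuse-kafka-test"]
import ast

def read_setting(lines, key):
    """Return the parsed value of key, or None if absent."""
    for line in lines:
        line = line.strip()
        if line.startswith("#") or "=" not in line:
            continue  # skip comments and non-assignments
        name, _, value = line.partition("=")
        if name.strip() == key:
            return ast.literal_eval(value.strip())
    return None

conf = ['# watched directories',
        'fuse_kafka_directories=["/tmp/fuse-kafka-test"]']
print(read_setting(conf, "fuse_kafka_directories"))
````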

Quickstart (from sources)
=========================

[Here is a capture of a quickstart](http://playterm.org/r/fusekafka-quickstart-1416935569).
([Download it](http://abdessel.iiens.net/fuse_kafka/ttyrecord) - use ttyplay to view)

If you want to test fuse\_kafka, use a clone of this repository.

On Debian and Ubuntu, you should install the following:

 *  librdkafka-dev
 *  librdkafka1
 *  libzookeeper-mt-dev
 *  libzookeeper-mt2
 *  libjansson-dev
 *  libjansson4
 *  python
First, build it:

    $ ./build.py

In another terminal session, start zookeeper (this will download kafka):

    $ ./build.py zookeeper_start

In another one, start kafka:

    $ ./build.py kafka_start

The default configuration is conf/fuse\_kafka.properties.
An important piece of configuration is fuse\_kafka\_directories:

    $ grep fuse_kafka_directories conf/fuse_kafka.properties -B2

````python
# directories fuse_kafka will listen to (launch script will try to
# create them if they don't exist)
fuse_kafka_directories=["/tmp/fuse-kafka-test"]
````

Start fuse\_kafka using the init script:

    $ src/fuse_kafka.py start

If you're not running as root, you might have to make
/etc/fuse.conf readable by your user (here to all users):

    $ chmod a+r /etc/fuse.conf

You must also allow non-root users to specify the allow\_other option, by adding
a line with user\_allow\_other in /etc/fuse.conf.

If fuse\_kafka is running, you should get the following output when
running:

    $ src/fuse_kafka.py status
    listening on /tmp/fuse-kafka-test
    fuse kafka is running

In yet another terminal, start a test consumer:

    $ ./build.py kafka_consumer_start

Then start writing to a file under the overlay directory:

    $ bash -c 'echo "foo"' > /tmp/fuse-kafka-test/bar

You should get output from the consumer similar to this:

````yaml
event:
    group: users
    uid: 1497
    @tags:
        -  test
    @fields:
         hostname: test
    @timestamp: 2014-10-03T09:07:04.000+0000
    pid: 6485
    gid: 604
    command: bash -c echo "foo"
    @message: foo
    path: /tmp/fuse-kafka-test/bar
    @version: 0.1.3
    user: yazgoo
````

When you're done, you can stop fuse\_kafka:

    $ src/fuse_kafka.py stop


Using fuse_kafka as a machine tail
==================================

If you want to tail all logs from /var/log:

    $ ./build.py
    $ LD_LIBRARY_PATH=. ./fuse_kafka -- --directories /var/log --output stdout --input inotify --encoder text

This is pretty useful to quickly see what is currently going on on a machine.


Quota option test
=================

First, comment out fuse\_kafka\_quota in conf/fuse\_kafka.properties.
Then, start fuse kafka:

    $ src/fuse_kafka.py start

Let's create a segfaulting program:

    $ cat first.c

````c
int main(void)
{
    *((int*)0) = 1;
}
````

````shell
$ gcc first.c
````

Then start a test consumer, displaying only the path and message\_size-added fields.

Launch the segfaulting program in the fuse-kafka-test directory:

````shell
$ /path/to/a.out
````

A new core file should appear in the fused directory.

Here is the consumer output:

````shell
$ SELECT="message_size-added path" ./build.py kafka_consumer_start
event:
    message_size-added: 4096
    path: /tmp/fuse-kafka-test/core
...
event:
    message_size-added: 4096
    path: /tmp/fuse-kafka-test/core
````

Here we see many messages.

Then, uncomment fuse\_kafka\_quota in conf/fuse\_kafka.properties and
launch the segfaulting program again:

````shell
$ SELECT="message_size-added path" ./build.py kafka_consumer_start
event:
    message_size-added: 64
    path: /tmp/fuse-kafka-test/core
````

This time, we only receive the first write.
Event format
============

We use a logstash event, except that the message and command are base64 encoded:

````json
{"path": "/var/log/redis_6380.log", "pid": 1262, "uid": 0, "gid": 0,
"@message": "aGVsbG8gd29ybGQ=",
"@timestamp": "2014-09-11T14:19:09.000+0000", "user": "root", "group":
"root",
"command": "L3Vzci9sb2NhbC9iaW4vcmVkaXMtc2VydmVyIC",
"@version": "0.1.2",
"@fields": {
    "first_field": "first_value",
    "second_field": "second_value" },
"@tags": ["mytag"]}
````
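
For instance, the two encoded fields can be restored with a few lines of Python. This is an illustrative sketch, not part of fuse\_kafka; note that the `command` value in the example above is truncated, so a decoder should restore base64 padding before decoding:

````python
# Sketch: decode the base64-encoded fields of a fuse_kafka event.
import base64
import json

def decode_event(raw):
    """Parse an event and decode its base64-encoded fields."""
    event = json.loads(raw)
    for key in ("@message", "command"):
        value = event.get(key, "")
        value += "=" * (-len(value) % 4)  # restore stripped padding
        event[key] = base64.b64decode(value).decode("utf-8", "replace")
    return event

event = decode_event('{"@message": "aGVsbG8gd29ybGQ=", "command": "bHM="}')
print(event["@message"])  # -> hello world
````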


Installing from sources
=======================

    # installing prerequisites
    $ sudo apt-get install librdkafka-dev libfuse-dev
    # building
    $ ./build.py
    # testing
    $ ./build.py test
    # cleaning
    $ ./build.py clean
    # installing:
    $ ./build.py install

You can add C compilation flags via the CFLAGS environment variable:

    $ CFLAGS=-Wall ./build.py

Using valgrind or another dynamic analysis tool
===============================================

The start script lets you specify a command to prefix the binary with
when it is actually launched, FUSE\_KAFKA\_PREFIX, which can be used to
perform analyses while running (like memcheck):

    FUSE_KAFKA_PREFIX="valgrind --leak-check=yes" ./src/fuse_kafka.py start


Debugging with gdb
==================

You can also debug using FUSE\_KAFKA\_PREFIX; here is how to do so:

    $ echo -e "set follow-fork-mode child\nrun\nwhere" > /tmp/gdb_opts
    $ FUSE_KAFKA_PREFIX="gdb -x /tmp/gdb_opts --args" ./src/fuse_kafka.py start

Anti hanging
============

fuse\_kafka must never make your filesystem accesses hang.
Although that would be a major bug, it might still happen
since the software is still young.
You can run a daemon so that any hanging filesystem
is unmounted (the check occurs every minute).
To do so on an installed instance:

    # service fuse_kafka_umounter start

To do so on a source-based instance:

    $ ./src/fuse_kafka.py start


setup.sh options
================

Here are the available options:

 - `-r`: do a remote install via ssh: `-r private_ssh_key user@host`
 - `-d`: download but do not install the packages, generating an archive
 - `-f`: install an archive already built via -d: `-f fuse_kafka.tar.bz2`

For example, this will download packages on a remote server:

````shell
$ ./setup.sh -r mykey.pem root@myserver -d
````

This will generate an archive that will be copied locally.
You can then install that archive via:

````shell
$ ./setup.sh -f fuse_kafka.tar.bz2
````


Networking tests
================

A more realistic network setup test can be launched (as root) via:

````shell
$ ./build.py
$ sudo ./build.py mininet
````

This requires [mininet](http://mininet.org).

This will launch kafka, zookeeper, fuse_kafka, and a consumer
on their own mininet virtual hosts with their own network stacks.
fuse_kafka runs on h3 (host number three).

You can also launch a mininet shell.
For example, if you want to try and write on the fuse_kafka host, issue a:

````shell
mininet> h3 echo lol > /tmp/fuse-kafka-test/xd
````

The consumer log is available in /tmp/kafka_consumer.log.

`quit` or `^D` will stop mininet and clean up the virtual network.

To debug, you should start by having a look at:

- /tmp/fuse_kafka.log
- /tmp/zookeeper.log
- /tmp/kafka.log

Logstash input plugin
=====================

A logstash input plugin to read from kafka is available in src/logstash/inputs/kafka.rb.

Provided you have kafka installed in . (which `./build.py kafka_start` should do),
you can try it by downloading logstash and running:

````shell
$ /path/to/bin/logstash -p ./src/ -f ./conf/logstash.conf
````

Unit testing
============

To launch unit tests, issue a:

````shell
$ ./build.py test
````

C unit tests will be launched with gdb.
If you set the NO_BATCH environment variable, you will get gdb prompts.

To test against multiple python versions (provided tox is installed), issue a:

````shell
$ tox
````

(see .travis.yml `# prerequisites for tox` to install these versions on ubuntu).


C Unit testing
==============

To run the C unit tests, do a:

````shell
$ rm -rf out/ ; mkdir -p out/c ; ./build.py compile_test && ./build.py c_test
````


Working with other logging systems
==================================

Basically, any process that has a file handle opened before fuse_kafka starts
won't have its writes captured.
Such a process must open a new file handle after fuse_kafka startup,
for example by restarting the process.

For example, if you're using rsyslogd and it is writing to /var/log/syslog,
after starting fuse_kafka on /var/log, you should issue a:

````shell
$ service rsyslogd restart
````

After stopping fuse_kafka, you should also restart rsyslogd so
it re-acquires a file descriptor on the actual FS.

Benchmarks
==========

Provided you have bonnie++ installed, you can run benchmarks with:

````shell
$ ./build.py bench
````

This will generate `benchs/results.js`, which you can view via `benchs/benchmarks.html`.

Dynamic configuration
=====================

You might want to have fuse_kafka start ahead of most processes.
But when it starts, you might not have all its configuration available yet.
Or you might want to add brokers or use new zookeepers.

Dynamic configuration allows you to modify the configuration on the fly.
You will be able to:

* point to new zookeepers/brokers
* update tags and fields
* modify watched directories

Just update your configuration, then issue a:

````shell
$ service fuse_kafka reload
````

Or, if you're using the developer version:

````shell
$ ./src/fuse_kafka.py reload
````

To use this feature, you must make sure that /var/run/fuse_kafka.args is accessible to fuse_kafka.


Input plugin
============

You can write your own input plugins in `src/plugins/input`.
An example input plugin is available in `src/plugins/input/example.c`.
A plugin should include:

````c
#include <input_plugin.h>
````

Its entry point is the function:

````c
int input_setup(int argc, char** argv, void* conf)
````

With parameters being:

parameter name | description
---------------|------------
argc           | number of command line arguments (without arguments given after `--` )
argv           | array of arguments
conf           | parsed configuration based on arguments given after `--` (see config.h)

Every process watching a given directory must declare itself with:

````c
void input_is_watching_directory(char* path)
````

It should output its data using:

````c
void output_write(const char *path, const char *buf,
        size_t size, off_t offset)
````

With parameters being:

parameter name | description
---------------|------------
path           | path of the file where the log line comes from
buf            | buffer containing the log line
size           | size of the log line
offset         | start of the log line in buf

If you require some library, you should refer to its pkg-config name via the macro:

````c
require(your-library)
````

You can specify a target platform regexp pattern if you want; for example:

````c
target(.*linux.*)
````

will only build the plugin for linux. If not specified, the plugin will
be built for all targets.


Input plugin unit testing
=========================

Each input plugin should have a unit test (with the suffix `_test`).

For example, `src/plugins/input/overlay.c` has a unit test
`src/plugins/input/overlay_test.c`.

As for the rest of the project, we use minunit for that.
Just include `minunit.h` and your plugin source.
Define your unit test functions, for example:

````C
static char* test_something()
{
    /*...*/
    mu_assert("42 is 42", 42 == 42);
    /*...*/
    return 0;
}
````

Then define an `all_tests()` function calling all tests:

````C
static char* all_tests()
{
    mu_run_test(test_something);
    mu_run_test(test_something_else);
    return 0;
}
````

and then:

````C
#include "minunit.c"
````

Also, you should exclude your test file from code coverage, using:

````C
// LCOV_EXCL_START
/* code to exclude */
// LCOV_EXCL_STOP
````


Output plugin
=============

You can write your own output plugins in `src/plugins/output`.
An example output plugin is available in `src/plugins/output/stdout.c`.
A plugin should include:

````c
#include <output.h>
````

It must define the following function:

````c
int output_setup(kafka_t* k, config* fk_conf)
````

It must set k->rkt to 1 upon success.

It must also define:

````c
int output_send(kafka_t* k, char* buf, size_t len)
````

It can define:

````c
void output_clean(kafka_t* k)
int output_update(kafka_t* k)
````

Unit testing is done the same way as for input plugins.


Write tests
===========

To compare the inotify input plugin with the overlay input plugin, run:

    rm -rf /tmp/zookeeper /tmp/kafka-logs; ./build.py write_tests

This will generate two files, /tmp/write_tests.overlay
and /tmp/write_tests.inotify, with the writes received by kafka.

This uses the file write_test.rb.

Auditd
======

Maybe you are using auditd and logging accesses to audit.log.
Before starting fuse_kafka, the init script issues a:

    auditctl -A exit,never -F path=/var/log/audit/audit.log -F perm=r -F pid=$pid

which disables such logging for fuse_kafka so there is no "audit flood".

RPM
===

You can generate an rpm (provided rpm-build is installed) via:

    ./build.py rpm

This will create a $HOME/rpmbuild directory and generate the rpm in
$HOME/rpmbuild/RPMS.

Status
======

To create a status per mount point, here is how we proceed:

* Each fuse_kafka process writes its pid and the directories it
watches in /var/run/fuse_kafka/watched

For example, if pid #1649 is watching /var/log, the following file
will be generated:

    /var/run/fuse_kafka/watched/var/log/1649.pid

To list watched directories, fuse_kafka.py lists such files and checks whether
fuse_kafka is running with such a pid.

If there is no process running or the process is not fuse_kafka, the
.pid file will be deleted.
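
As an illustration, the mapping between such a .pid file and its watched directory can be recovered like this (a hypothetical helper, not the actual fuse_kafka.py code):

````python
# Sketch: map a pid file under /var/run/fuse_kafka/watched back to
# (watched directory, pid), e.g.
# /var/run/fuse_kafka/watched/var/log/1649.pid -> ("/var/log", 1649)
import os

ROOT = "/var/run/fuse_kafka/watched"

def parse_pid_file(path):
    """Recover the watched directory and pid from a pid file path."""
    rel = os.path.relpath(path, ROOT)       # e.g. "var/log/1649.pid"
    directory = "/" + os.path.dirname(rel)  # e.g. "/var/log"
    pid = int(os.path.basename(rel)[:-len(".pid")])
    return directory, pid

print(parse_pid_file("/var/run/fuse_kafka/watched/var/log/1649.pid"))
````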


Queue
=====

When the output plugin is not initialized, some events may be lost.
A queue was added to store events and to send them as soon as
the output gets initialized (see queue.c and output_write in output.c).
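
The idea can be sketched in a few lines of Python (an illustration of the buffering behaviour only, not the actual C code in queue.c):

````python
# Sketch: buffer events while the output is down, flush them
# in arrival order once it comes up.
class BufferedOutput:
    def __init__(self):
        self.ready = False
        self.queue = []  # events waiting for the output
        self.sent = []   # events actually delivered

    def write(self, event):
        if self.ready:
            self.sent.append(event)
        else:
            self.queue.append(event)  # output not initialized yet

    def set_ready(self):
        self.ready = True
        while self.queue:             # drain in arrival order
            self.sent.append(self.queue.pop(0))

out = BufferedOutput()
out.write("early event")   # buffered
out.set_ready()            # flushes the buffer
out.write("late event")    # delivered directly
print(out.sent)  # -> ['early event', 'late event']
````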

Windows (mingw)
===============

You need to build jansson, zookeeper and librdkafka separately.
Then:

    cd src
    ln -s ../../win32/zookeeper-3.4.6/src/c/include zookeeper
    ln -s ../../librdkafka/src librdkafka
    cd -
    CC=x86_64-w64-mingw32-gcc CFLAGS="-I../win32/dlfcn-win32 -I../win32/zookeeper-3.4.6/src/c/include -I../win32/zookeeper-3.4.6/src/c/generated -I../win32/jansson-2.4/src -DMINGW_VER -D_X86INTRIN_H_INCLUDED" LDFLAGS="-L../win32/jansson-2.4/src -w -L../librdkafka/src -L../win32/zookeeper-3.4.6/src/c/.libs -L/home/yazgoo/dev/win32/jansson-2.4/src/.libs/ -L../win32/dlfcn-win32 -L../win32/zlib-1.2.8/" LIBS="-lws2_32 -lpsapi" ./build.py

For testing purposes, you can run fuse_kafka with wine:

    ln -s fuse_kafka fuse_kafka.exe
    cp /usr/x86_64-w64-mingw32/lib/libwinpthread-1.dll .
    cp ../librdkafka/src/librdkafka.so.1 .
    cp /usr/lib/gcc/x86_64-w64-mingw32/4.8/libgcc_s_sjlj-1.dll .
    FUSE_KAFKA_PREFIX=wine ./src/fuse_kafka.py start


Tail
====

It is possible to tail events in kafka using logstash, by doing:

````sh
https_proxy=http://user:password@host:port FUSE_KAFKA_ZK_CONNECT="your zk address" ./build.py tail
````

This will download logstash and launch src/logstash/inputs/fuse_kafka.rb,
using conf/logstash.conf as configuration.


Generating self contained archive from sources
==============================================

You can generate an archive with all dependencies with the binary_archive target.

For example, to generate an archive for windows, building with mingw:

    SRCROOT=/tmp/sources BUILDROOT=/tmp/output CXX=x86_64-w64-mingw32-g++ CC=x86_64-w64-mingw32-gcc CFLAGS="-I$PWD/../out/include -DMINGW_VER -D_X86INTRIN_H_INCLUDED -DWIN32 -DNDEBUG -D_WINDOWS -D_USRDLL -DZOOKEEPER_EXPORTS -DDLL_EXPORT -w -fpermissive -D_X86INTRIN_H_INCLUDED -DLIBRDKAFKA_EXPORTS -DInterlockedAdd=_InterlockedAdd -DMINGW_VER -D_WIN32_WINNT=0x0760" LDFLAGS="-L$PWD/../out/lib" LIBS="-lwsock32 -lws2_32 -lpsapi" archive_cmds_need_lc=no LDSHAREDLIBC= ./build.py binary_archive

    SRCROOT=/tmp/lolo BUILDROOT=$PWD/../out/ CXX=x86_64-w64-mingw32-g++ CC=x86_64-w64-mingw32-gcc CFLAGS="-I$PWD/../out/include -DMINGW_VER -D_X86INTRIN_H_INCLUDED -DWIN32 -DNDEBUG -D_WINDOWS -D_USRDLL -DZOOKEEPER_EXPORTS -DDLL_EXPORT -w -fpermissive -D_X86INTRIN_H_INCLUDED -DLIBRDKAFKA_EXPORTS -DInterlockedAdd=_InterlockedAdd -DMINGW_VER -D_WIN32_WINNT=0x0760" LDFLAGS="-L$PWD/../out/lib -Xlinker --no-undefined -Xlinker --enable-runtime-pseudo-reloc" LIBS="-lwsock32 -lws2_32 -lpsapi" archive_cmds_need_lc=no LDSHAREDLIBC= ./build.py binary_archive

This will:

1. download source dependencies into SRCROOT
1. build them and install them in BUILDROOT
1. add additional libraries from wine if we're building for windows
1. download python if we're building for windows
1. create an archive in __../fuse_kafka-$version-bin.tar.gz__


Verbose tracing
===============

You can enable verbose mode via:

    CFLAGS="-DFK_DEBUG" ./build.py

Encoding
========

You can specify how you want the data written to your output.
See fuse_kafka.properties for possible values.

Docker environment
==================

1. Shippable generates https://registry.hub.docker.com/u/yazgoo/fuse_kafka/
1. For developing, the advised dockerfile is docker/homeship.dockerfile (generated with https://github.com/yazgoo/homeship)
1. For rpm building, the advised dockerfile is docker/rpmbuild.dockerfile

Zookeeper multithreaded or not
==============================

You can use either zookeeper_mt (zookeeper multithreaded, the default) or zookeeper_st (single threaded).
To use the single threaded version, just set the `zookeeper_st=y` environment variable.

Licensing
=========

Licensed under Apache License 2.0; see the LICENSE file.

Tagging
=======

- for versions, we use github release tags, for example 0.1.4
- for the OBS source release number, we use a lightweight tag: for example the package
    fuse_kafka-0.1.4-20.1.x86_64.rpm will have the tag 0.1.4-20 (20 being the release number)
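
As an illustration of that naming convention, the lightweight tag for a given package file name could be derived like this (a hypothetical helper, not part of the build):

````python
# Sketch: derive the lightweight OBS tag from a package file name,
# e.g. fuse_kafka-0.1.4-20.1.x86_64.rpm -> 0.1.4-20
import re

def obs_tag(rpm_name):
    """Return "version-release" for an rpm file name, or None."""
    match = re.match(r".*-(\d+\.\d+\.\d+)-(\d+)\.\d+\.[^.]+\.rpm$", rpm_name)
    if match is None:
        return None
    version, release = match.groups()
    return "%s-%s" % (version, release)

print(obs_tag("fuse_kafka-0.1.4-20.1.x86_64.rpm"))  # -> 0.1.4-20
````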


Version
=======

To get the version of fuse_kafka you're running, just issue a:

    fuse_kafka -- --version

Code of conduct
===============

Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See the CODE OF CONDUCT file.