Welcome to zewaren.net. This site presents me and mostly archives solutions to problems I once had.

How to stop the watchdog timer of a BeagleBone Black running Linux

A BeagleBone Black with an operator setting its watchdog timer

The BeagleBone Black's SoC (AM335x) includes a watchdog timer that will reset the whole board if it isn't pinged regularly.

Let's see if we can stop that thing on a board running the latest Debian GNU/Linux to date.

# uname -a
Linux beaglebone 4.4.9-ti-r25 #1 SMP Thu May 5 23:08:13 UTC 2016 armv7l GNU/Linux
root@beaglebone:~# cat /etc/debian_version
8.4

Ever since this commit, the OMAP watchdog driver has had the magic close feature enabled. This means that closing the timer's device won't stop the timer from ticking. The only way to stop it is to send it the magic character 'V' (a capital 'v').

# wdctl /dev/watchdog
wdctl: write failed: Invalid argument
Device:        /dev/watchdog
Identity:      OMAP Watchdog [version 0]
Timeout:       120 seconds
Timeleft:      119 seconds
FLAG           DESCRIPTION               STATUS BOOT-STATUS
KEEPALIVEPING  Keep alive ping reply          0           0
MAGICCLOSE     Supports magic close char      0           0
SETTIMEOUT     Set timeout (in seconds)       0           0

This feature is particularly useful if you want the watchdog timer to be active only while a specific application is running, and stopped when the application exits normally.

Unfortunately, the kernel can be configured with a mode called "no way out", which means that even though the magic close feature of the driver is enabled, it won't be honored at all, and you are doomed to ping your timer until the end of time once you have opened the device.

# cat /proc/config.gz | gunzip | grep CONFIG_WATCHDOG_NOWAYOUT
CONFIG_WATCHDOG_NOWAYOUT=y

On kernel version 3.8, the option was not enabled:

$ cat /proc/config.gz | gunzip | grep CONFIG_WATCHDOG_NOWAYOUT
# CONFIG_WATCHDOG_NOWAYOUT is not set

So, how do we stop that thing?

Well, you can see in the code of the driver that the kernel's default value can be overridden by a module parameter:

static bool nowayout = WATCHDOG_NOWAYOUT;
module_param(nowayout, bool, 0);
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started "
	"(default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");

Edit the boot configuration in /boot/uEnv.txt and set that parameter to 0 in the cmdline:

cmdline=coherent_pool=1M quiet cape_universal=enable omap_wdt.nowayout=0

Reboot the board, and check that the loaded command line was changed correctly:

# cat /proc/cmdline
console=tty0 console=ttyO0,115200n8 root=/dev/mmcblk0p1 rootfstype=ext4 rootwait coherent_pool=1M quiet cape_universal=enable omap_wdt.nowayout=0

That's it. Now if you send a 'V' to the watchdog right before closing it, it will be stopped.
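Here is a minimal sketch of that pattern in Python (the device path and the nowayout=0 setup come from this article; the ping loop and interval are illustrative):

import time

# Opening the device starts the watchdog: from now on it must be pinged.
wd = open('/dev/watchdog', 'wb', buffering=0)
for _ in range(10):      # stand-in for the application's main loop
    wd.write(b'\0')      # any write counts as a ping
    time.sleep(1)
# Magic close: with nowayout=0, writing 'V' right before closing
# tells the driver to stop the timer instead of resetting the board.
wd.write(b'V')
wd.close()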

How to debug Python code using GDB on FreeBSD without compromising your system

Introduction

We want to be able to use GDB to debug python code efficiently.

Let's say we have the following code:

from threading import Event
import random
from time import sleep

def blocking_function_one():
	while True:
		sleep(1)

def blocking_function_two():
	e = Event()
	e.wait()

if random.random() > 0.5:
	blocking_function_one()
else:
	blocking_function_two()

That code will block, and since it doesn't output anything, we have no way of knowing whether we went into blocking_function_one or blocking_function_two. Or do we?

For reference, I'm running a 10.2-RELEASE:

# uname -a
FreeBSD bsdlab 10.2-RELEASE FreeBSD 10.2-RELEASE #0 r286666: Wed Aug 12 15:26:37 UTC 2015     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Step 1: installing a debug version of python

We're going to work in a separate directory in order not to alter our installation. If we need this in production, we want to be able to leave the system as clean as possible when we're done.

# mkdir /usr/local/python-debug

Build Python 3.4 from the ports collection and set it to be installed in the directory we just created:

# cd /usr/ports/lang/python34
# make install PREFIX=/usr/local/python-debug OPTIONS_FILE_SET+=DEBUG BATCH=1

Normally we would have used NO_PKG_REGISTER=1 to install the package without registering it on the system. Unfortunately, this option no longer works (see bug https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=182347).

So, let's copy the files ourselves:

# cp -r work/stage/usr/local/python-debug/* /usr/local/python-debug/

Let's try to run that new python installation:

# setenv LD_LIBRARY_PATH /usr/local/python-debug/lib
# /usr/local/python-debug/bin/python3.4dm
Python 3.4.5 (default, Sep  4 2016, 00:42:59)
[GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 (tags/RELEASE_34/dot1-final 208032)] on freebsd10
Type "help", "copyright", "credits" or "license" for more information.
>>>

The "d" in "dm" means "debug".

Building python also produced an important file that we need to save for later before cleaning the work tree:

# cp work/Python-3.4.5/python-gdb.py ~/

Step 2: building a Python-aware GDB

Build GDB from the ports collection:

  • Making sure the python extensions are enabled
  • Telling the configure script where to find our python installation
  • Telling the configure and build scripts where to find the relevant headers and libraries
# ln -s python3.4 /usr/local/python-debug/bin/python
# cd /usr/ports/devel/gdb
# make PREFIX=/usr/local/python-debug \
OPTIONS_FILE_SET+=PYTHON \
PYTHON_CMD=/usr/local/python-debug/bin \
BATCH=1 \
CFLAGS+="-I/usr/local/python-debug/include -L/usr/local/python-debug/lib" \
CXXFLAGS+="-I/usr/local/python-debug/include -L/usr/local/python-debug/lib"

Also copy that installation manually to our special directory:

# cp -r work/stage/usr/local/python-debug/* /usr/local/python-debug/

Let's check that it's working and has the python extensions:

# /usr/local/python-debug/bin/gdb
GNU gdb (GDB) 7.11.1 [GDB v7.11.1 for FreeBSD]
[...]
(gdb) python
>import gdb
>end
(gdb)

Step 3: wire it all together

Now we have:

  • A version of Python that integrates debug information.
  • A version of GDB that can run higher level GDB scripts written in Python.
  • A python-gdb script to add commands and macros.

Copy the GDB script somewhere where Python can load it:

# mkdir ~/.python_lib
# mv ~/python-gdb.py ~/.python_lib/python34_gdb.py

Let's run our stupid blocking script:

# setenv PATH "/usr/local/python-debug/bin/:${PATH}"
# python where-am-i-blocking.py
[blocked]

In another shell, find the PID of the script, and attach GDB there.

# ps auxw | grep python
root     24226   0.0  0.7  48664  15492  3  I+    3:00AM     0:00.13 python where-am-i-blocking.py (python3.4)
root     24235   0.0  0.1  18824   2004  4  S+    3:00AM     0:00.00 grep python

# setenv PATH "/usr/local/python-debug/bin/:${PATH}"
# gdb python 24226
GNU gdb (GDB) 7.11.1 [GDB v7.11.1 for FreeBSD]
[...]
[Switching to LWP 100160 of process 24226]
0x00000008018e3f18 in _umtx_op () from /lib/libc.so.7
(gdb)

Load the GDB python script:

(gdb) python
>import sys
>sys.path.append('/root/.python_lib')
>import python34_gdb
>end

The python macros are now loaded:

(gdb) py
py-bt               py-down             py-locals           py-up               python-interactive
py-bt-full          py-list             py-print            python

Let's see where we are:

(gdb) py-bt
Traceback (most recent call first):
  <built-in method acquire of _thread.lock object at remote 0x80075c2a8>
  File "/usr/local/python-debug/lib/python3.4/threading.py", line 290, in wait
    waiter.acquire()
  File "/usr/local/python-debug/lib/python3.4/threading.py", line 546, in wait
    signaled = self._cond.wait(timeout)
  File "where-am-i-blocking.py", line 11, in blocking_function_two
    e.wait()
  File "where-am-i-blocking.py", line 16, in <module>
    blocking_function_two()

We're in blocking_function_two.

Let's check the local variables of the wait frame:

(gdb) bt
#0  0x00000008018e3f18 in _umtx_op () from /lib/libc.so.7
#1  0x00000008018d3604 in sem_timedwait () from /lib/libc.so.7
#2  0x0000000800eb0421 in PyThread_acquire_lock_timed (lock=0x802417590, microseconds=-1, intr_flag=1) at Python/thread_pthread.h:352
#3  0x0000000800eba84f in acquire_timed (lock=0x802417590, microseconds=-1) at ./Modules/_threadmodule.c:71
#4  0x0000000800ebab82 in lock_PyThread_acquire_lock (self=0x80075c2a8, args=(), kwds=0x0) at ./Modules/_threadmodule.c:139
#5  0x0000000800cfa963 in PyCFunction_Call (func=<built-in method acquire of _thread.lock object at remote 0x80075c2a8>, arg=(), kw=0x0)
    at Objects/methodobject.c:99
#6  0x0000000800e31716 in call_function (pp_stack=0x7fffffff5a00, oparg=0) at Python/ceval.c:4237
#7  0x0000000800e29fc0 in PyEval_EvalFrameEx (
    f=Frame 0x80245d738, for file /usr/local/python-debug/lib/python3.4/threading.py, line 290, in wait (self=<Condition(_lock=<_thread.l
ock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object a
t remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>, timeout=No
ne, waiter=<_thread.lock at remote 0x80075c2a8>, saved_state=None, gotit=False), throwflag=0) at Python/ceval.c:2838
[...]
#25 0x0000000800eb4e86 in run_file (fp=0x801c1e140, filename=0x802418090 L"where-am-i-blocking.py", p_cf=0x7fffffffe978)
    at Modules/main.c:319
#26 0x0000000800eb3ab7 in Py_Main (argc=2, argv=0x802416090) at Modules/main.c:751
#27 0x0000000000400cae in main (argc=2, argv=0x7fffffffeaa8) at ./Modules/python.c:69

(gdb) frame 7
#7  0x0000000800e29fc0 in PyEval_EvalFrameEx (
    f=Frame 0x80245d738, for file /usr/local/python-debug/lib/python3.4/threading.py, line 290, in wait (self=<Condition(_lock=<_thread.lock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object at remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>, timeout=None, waiter=<_thread.lock at remote 0x80075c2a8>, saved_state=None, gotit=False), throwflag=0) at Python/ceval.c:2838
2838                res = call_function(&sp, oparg);

(gdb) py-locals
self = <Condition(_lock=<_thread.lock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object at remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>
timeout = None
waiter = <_thread.lock at remote 0x80075c2a8>
saved_state = None
gotit = False

If you don't want to or can't attach to the running process, you can do the same thing with a core dump:

# gcore 24226
# gdb python core.24226

If you don't want to add the lib directory to Python's path every time you use GDB, add it to your profile's GDB init script:

cat > ~/.gdbinit <<EOF
python
import sys
sys.path.append('/root/.python_lib')
end
EOF

You'll only need to import the module (python import python34_gdb) and you'll be good to go.
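With the init script in place, a typical session boils down to the commands already seen above:

(gdb) python import python34_gdb
(gdb) py-bt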

More resources

Bonus problem: loading Debian's libc's debug info on an armhf

I've done the exact same thing on a BeagleBone Black system running Debian.

Unfortunately GDB was complaining that the stack was corrupt.

# gdb /usr/local/opt/python-3.4.4/bin/python3.4dm core.18513
GNU gdb (GDB) 7.11
[...]
Reading symbols from /usr/local/opt/python-3.4.4/bin/python3.4dm...done.
[New LWP 18513]
[New LWP 18531]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
Core was generated by `python3.4dm'.
#0  0xb6f7d7e0 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
[Current thread is 1 (Thread 0xb6fac000 (LWP 18513))]
(gdb) bt
#0  0xb6f7d7e0 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1  0xb6f7d7d4 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
#2  0xb64ae6f8 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)

A search on the internet indicated that it was because I was missing package libc6-dbg, but that package was installed.

# apt-get install libc6-dbg
Reading package lists... Done
Building dependency tree
Reading state information... Done
libc6-dbg is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

The problem was that using my custom installation directory made GDB look for these files in the wrong place.

(gdb) show debug-file-directory
The directory where separate debug symbols are searched for is "/usr/local/opt/python-3.4.4/lib/debug".

Setting that variable in the init file solves the problem:

cat >> ~/.gdbinit <<EOF
set debug-file-directory /usr/lib/debug
EOF
Re-running GDB on the same core now gives a proper backtrace:

[Current thread is 1 (Thread 0xb6fac000 (LWP 18513))]
(gdb) bt
#0  0xb6f7d7e0 in recv () at ../sysdeps/unix/syscall-template.S:82
#1  0xb68fe18e in sock_recv_guts (s=0xb64ae6f8, cbuf=0x50e2e8 '\313' <repeats 199 times>, <incomplete sequence \313>..., len=65536, flags=0)
    at /tmp/Python-3.4.4/Modules/socketmodule.c:2600
[...]
#38 0x00025752 in PyRun_AnyFileExFlags (fp=0x32bcc8, filename=0xb6be5310 "main.py", closeit=1, flags=0xbed1db20) at Python/pythonrun.c:1287
#39 0x0003b1ee in run_file (fp=0x32bcc8, filename=0x2c99f0 L"main.py", p_cf=0xbed1db20) at Modules/main.c:319
#40 0x0003beb8 in Py_Main (argc=2, argv=0x2c9010) at Modules/main.c:751
#41 0x000208d8 in main (argc=2, argv=0xbed1dd14) at ./Modules/python.c:69

How to install PyInstaller in a Python Virtual-Env on FreeBSD

# uname -a
FreeBSD freebsderlang 10.3-RELEASE FreeBSD 10.3-RELEASE #0 r297264: Fri Mar 25 02:10:02 UTC 2016     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Install Python and VirtualEnv

Install Python:

# make -C /usr/ports/lang/python34/ install clean

Install virtualenv:

# setenv PYTHON_VERSION python3.4
# make -C /usr/ports/devel/py-virtualenv install clean

Create a virtual env:

# virtualenv-3.4 venv
Using base prefix '/usr/local'
New python executable in /root/somewhere/venv/bin/python3.4
Also creating executable in /root/somewhere/venv/bin/python
Installing setuptools, pip, wheel...done.

Dive into it:

# source venv/bin/activate.csh

Download, build and install PyInstaller

Download the latest version of PyInstaller, check that it was correctly downloaded, and extract it:

[venv] # fetch 'https://github.com/pyinstaller/pyinstaller/releases/download/v3.2/PyInstaller-3.2.tar.gz' --no-verify-peer
[venv] # sha256 PyInstaller-3.2.tar.gz
SHA256 (PyInstaller-3.2.tar.gz) = 7598d4c9f5712ba78beb46a857a493b1b93a584ca59944b8e7b6be00bb89cabc
[venv] # tar xzf PyInstaller-3.2.tar.gz

Go into the bootloader directory and build all:

[venv] # cd PyInstaller-3.2/bootloader/
[venv] # python waf all

Go back to the release directory and build and install as usual:

[venv] # cd ..
[venv] # python setup.py install

Test PyInstaller

[venv] # cat > some_python_script.py << EOF
print("Je suis une saucisse")
EOF
[venv] # pyinstaller --onefile some_python_script.py
[venv] # dist/some_python_script
Je suis une saucisse

How to translate a Flask-App, including content generated by Javascript

Attachment: flask-i18n-example-master.zip (24.34 KB)

If you want your Flask apps to support internationalization, including content generated by Javascript, the well-known approach from The Flask Mega-Tutorial is to create a single javascript file per language. While this method works, I would prefer using a standard gettext-style method.

Meet jsgettext. It is basically a boiled-down version of the usual gettext tool that is used everywhere in nearly every programming language. It supports the standard gettext functions, including the ones using contexts or plural forms.

Unfortunately, I found that the documentation of its usage was quite limited. This is why I've uploaded here a sample app that gets its text translated in three different places:

  • In the Python code.
  • In the Jinja templates.
  • In Javascript-generated text.

Each part includes examples of translations with placeholders and/or plural forms.
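To give an idea of what those look like on the Python side, here is a minimal stdlib-gettext sketch (the catalog name and paths are illustrative, not taken from the sample app):

import gettext

# Load the 'messages' catalog for French; fall back to the source
# strings if no compiled .mo file is found, so this runs anywhere.
t = gettext.translation('messages', localedir='translations',
                        languages=['fr'], fallback=True)
_ = t.gettext

# Placeholder example
print(_('Hello %(name)s!') % {'name': 'Erika'})

# Plural-form example
n = 3
print(t.ngettext('%(num)d apple', '%(num)d apples', n) % {'num': n})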

You can find the code attached to this post, but for convenience I've also uploaded it on GitHub:
https://github.com/ZeWaren/flask-i18n-example

Creating a simple git repository server (with ACLs) on FreeBSD

Let's create a simple git server on FreeBSD.

It should:

  • Allow people to clone/pull/push using both SSH and HTTP.
  • Have a web view.
  • Have ACLs to allow repositories to only be visible and/or accessible by some specific users.

SSH interaction: gitolite

Let's install gitolite. It handles SSH connections and has the ACL functionality we're after.

First, here's a good read about how gitolite works: http://gitolite.com/gitolite/how.html#%281%29

On the git server

Install gitolite:

# make -C /usr/ports/devel/gitolite/ install clean

Copy your public key to the server, naming it [username].pub. That username will be considered the admin user.

Create a UNIX user that will own the files:

# pw useradd gitolite
# mkdir /home/gitolite
# chown gitolite:gitolite /home/gitolite
# cd /home/gitolite

Log in as the UNIX user and initialize the system:

# sudo -s -u gitolite
% id
uid=1003(gitolite) gid=1003(gitolite) groups=1003(gitolite)
% /usr/local/bin/gitolite setup -pk admin.pub

Notice that the admin user can log in using SSH, and that it will only execute gitolite's shell:

% cat .ssh/authorized_keys
command="/usr/local/libexec/gitolite/gitolite-shell admin",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [some key]== zwm@git.example.net

That's all you need to do on the server.

On your client

Creating users and repositories

Clone the admin repository.

# git clone gitolite@git.example.net:gitolite-admin

Create two new keys (and thus users) and add them to the repository:

# ssh-keygen -t rsa -f erika
# ssh-keygen -t rsa -f jean

# cp erika.pub gitolite-admin/keydir
# cp jean.pub gitolite-admin/keydir

# cd gitolite-admin
# git add keydir/jean.pub
# git add keydir/erika.pub
# git commit -m "Add users Jean and Erika."
# git push origin master

Create new repositories by setting their ACLs in the config file:

# cat conf/gitolite.conf

repo gitolite-admin
    RW+     =   admin

repo testing
    RW+     =   @all

repo erika_only
    RW+     =   erika

repo erika_and_jean
    RW+     =   erika jean

# git add conf/gitolite.conf
# git commit -m "Add two new repos"
# git push origin master

Using the server

Try to clone repository erika_only with user jean:

# setenv GIT_SSH_COMMAND 'ssh -i jean'
# git clone gitolite@git.example.net:erika_only
Cloning into 'erika_only'...
FATAL: R any erika_only jean DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Our access was denied. ACLs are working.

Try to clone an ACL-allowed repository:

# git clone gitolite@git.example.net:erika_and_jean
# cd 
# echo "Test" > test.txt
# git add test.txt
# git commit -m "Test commit"
# git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 218 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To gitolite@git.example.net:erika_and_jean
 * [new branch]      master -> master

Success.

HTTP interaction: nginx+git-http-backend

I assume you already know how to install and do the basic configuration of nginx.

Install fcgiwrap:

# make -C /usr/ports/www/fcgiwrap install clean

Configure fcgiwrap to use the right UNIX user in /etc/rc.conf:

fcgiwrap_enable="YES"
fcgiwrap_user="gitolite"
fcgiwrap_profiles="gitolite"
fcgiwrap_gitolite_socket="tcp:198.51.100.42:7081"

Create a password file:

# cat /usr/local/etc/nginx/git_users.htpasswd
jean:$apr1$fkADkYbl$Doen7IMxNwmD/r6X1LdM.1
erika:$apr1$fOOlnSig$4PONnRHK3PMu8j1HnxECc0

Use openssl passwd -apr1 to generate the password hashes.
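For instance (passing the password on the command line is acceptable here only because this is a throwaway test setup):

# openssl passwd -apr1 lol

Paste the resulting $apr1$... hash after "jean:" in the file.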

Configure nginx:

    server {
        [usual config here]

        auth_basic           "RESTRICTED ACCESS";
        auth_basic_user_file /usr/local/etc/nginx/git_users.htpasswd;
        client_max_body_size 256m;

        location ~ /git(/.*) {
            root /home/gitolite/;
            fastcgi_split_path_info ^(/git)(.*)$;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME     /usr/local/libexec/gitolite/gitolite-shell;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REMOTE_USER        $remote_user;

            fastcgi_param GIT_PROJECT_ROOT    /home/gitolite/repositories;
            fastcgi_param GIT_HTTP_BACKEND /usr/local/libexec/git-core/git-http-backend;
            fastcgi_param GITOLITE_HTTP_HOME /home/gitolite;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";

            # This include must be AFTER the above declaration. Otherwise, SCRIPT_FILENAME will be set incorrectly and the shell will 403.
            include       fastcgi_params;
            fastcgi_pass 198.51.100.42:7081;
        }
    }

Here we call gitolite-shell instead of git-http-backend directly to have gitolite check the users' permissions.

Let's clone a repository, add a commit and push it:

# git clone 'http://jean:lol@git.example.net:8080/git/erika_and_jean.git' erika_and_jean
Cloning into 'erika_and_jean'...
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

# cd erika_and_jean/
root@test:~/gitolite2/erika_and_jean # vim test.txt
root@test:~/gitolite2/erika_and_jean # git add test.txt
root@test:~/gitolite2/erika_and_jean # git commit -m "Pushed from HTTP"
[master 7604185] Pushed from HTTP
 1 file changed, 1 insertion(+)

# git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 258 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://jean:lol@git.example.net:8080/git/erika_and_jean.git
   fa03b7d..7604185  master -> master

Let's try to clone a repository we're not allowed to see:

# git clone 'http://jean:lol@git.example.net:8080/git/erika.git' erika
Cloning into 'erika'...
fatal: remote error: FATAL: R any erika jean DENIED by fallthru
(or you mis-spelled the reponame)

ACLs are working. Success.

Web view: GitWeb

Make sure git is compiled with option GITWEB.

Copy the gitweb files where nginx will look for them:

# cp -r /usr/local/share/examples/git/gitweb /usr/local/www/gitweb

Configure nginx:

      location / {
          root /usr/local/www/gitweb;
          index gitweb.cgi;

          location ~ ^/(.*\.cgi)$ {
              include  fastcgi_params;
              fastcgi_pass 198.51.100.42:7081;
              fastcgi_index gitweb.cgi;
              fastcgi_param SCRIPT_FILENAME /usr/local/www/gitweb/gitweb.cgi;
              fastcgi_param DOCUMENT_ROOT /usr/local/www/gitweb;
              fastcgi_param GITWEB_CONFIG /usr/local/etc/gitweb.conf;
              fastcgi_param REMOTE_USER        $remote_user;
          }
      }

No magic here. The Gitolite/GitWeb interaction is irrelevant to the webserver.

Use the gitolite command to find the values of the GL_ variables:

gitolite query-rc -a

Configure gitweb in /usr/local/etc/gitweb.conf:

BEGIN {
    $ENV{HOME} = "/home/gitolite";
    $ENV{GL_BINDIR} = "/usr/local/libexec/gitolite";
    $ENV{GL_LIBDIR} = "/usr/local/libexec/gitolite/lib";
}

use lib $ENV{GL_LIBDIR};
use Gitolite::Easy;

$projectroot = $ENV{GL_REPO_BASE};
our $site_name = "Example.net Git viewer";

$ENV{GL_USER} = $cgi->remote_user || "gitweb";

$export_auth_hook = sub {
    my $repo = shift;
    # gitweb passes us the full repo path; we need to strip the beginning and
    # the end, to get the repo name as it is specified in gitolite conf
    return unless $repo =~ s/^\Q$projectroot\E\/?(.+)\.git$/$1/;

    # call Easy.pm's 'can_read' function
    return can_read($repo);
};

When connected as erika: [screenshot]

When connected as jean: [screenshot]

ACLs are working. Success.

Conclusion

Our users can now see, read and sometimes write into the repositories of our git server.

You can create guest accounts that will only be able to see specific repositories, and they won't even know the other ones are there.
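For instance, a hypothetical snippet in the same conf style as above (assuming a guest.pub key was added to keydir) that gives a guest read-only access to a single repository:

repo public_demo
    R       =   guest
    RW+     =   admin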

No need to maintain a gitlab instance if your needs are simple.

Jabber/XMPP transport for Google Hangouts

Social pressure forces me to communicate with people using Hangouts, Google's instant messaging platform.

Unfortunately, the only way to use it is either on Gmail, or using the worst client you've ever seen: a Google Chrome extension.

My operating system is not a f*cking web browser!

What if I want to:

  • Copy links in chat messages without opening them?
  • Open links in my default web browser and not in Chrome?
  • Use the integrated notification system of my operating system and not some weak-ass "tab title change"?
  • Use IM on slow hardware?

Being an active user of Jabber/XMPP, I decided the best way to solve all these problems at once would be to write a transport for my Jabber server.

This transport can be found on GitHub:
https://github.com/ZeWaren/jabber-hangouts-transport

That way, I can communicate with people using Hangouts with any Jabber client.

Here are a few screenshots from Psi and Pidgin:

[screenshots]

Building and running Couchbase on FreeBSD

Let's try to build and run Couchbase on FreeBSD!

The system I'm using here is completely new.

# uname -a
FreeBSD couchbasebsd 10.2-RELEASE FreeBSD 10.2-RELEASE #0 r286666: Wed Aug 12 15:26:37 UTC 2015     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Fetching the source

Let's download repo, Google's tool to fetch multiple git repositories at once.

# fetch https://storage.googleapis.com/git-repo-downloads/repo -o /root/bin/repo --no-verify-peer
/root/bin/repo                                100% of   25 kB 1481 kBps 00m00s

I don't have any certificate bundle installed, so I need --no-verify-peer to prevent OpenSSL from complaining. In exchange, I must verify that the file is correct before executing it.

# sha1 /root/bin/repo
SHA1 (/root/bin/repo) = da0514e484f74648a890c0467d61ca415379f791

The list of SHA1s can be found in Android Open Source Project - Downloading the Source.

Make it executable.

# chmod +x /root/bin/repo

Create a directory to work in.

# mkdir couchbase && cd couchbase

I'll be fetching branch 3.1.1, which is the latest release at the time I'm writing this.

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml
env: python: No such file or directory

Told you the system was brand new.

# make -C /usr/ports/lang/python install clean

Let's try again.

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

fatal: 'git' is not available
fatal: [Errno 2] No such file or directory

Please make sure git is installed and in your path.

Install git:

# make -C /usr/ports/devel/git install clean

Try again:

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

Traceback (most recent call last):
  File "/root/couchbase/.repo/repo/main.py", line 526, in <module>
    _Main(sys.argv[1:])
  File "/root/couchbase/.repo/repo/main.py", line 502, in _Main
    result = repo._Run(argv) or 0
  File "/root/couchbase/.repo/repo/main.py", line 175, in _Run
    result = cmd.Execute(copts, cargs)
  File "/root/couchbase/.repo/repo/subcmds/init.py", line 395, in Execute
    self._ConfigureUser()
  File "/root/couchbase/.repo/repo/subcmds/init.py", line 289, in _ConfigureUser
    name  = self._Prompt('Your Name', mp.UserName)
  File "/root/couchbase/.repo/repo/project.py", line 703, in UserName
    self._LoadUserIdentity()
  File "/root/couchbase/.repo/repo/project.py", line 716, in _LoadUserIdentity
    u = self.bare_git.var('GIT_COMMITTER_IDENT')
  File "/root/couchbase/.repo/repo/project.py", line 2644, in runner
    p.stderr))
error.GitError: manifests var:
*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'root@couchbasebsd.(none)')

Configure your git information:

# git config --global user.email "you@example.com"
# git config --global user.name "Your Name"

Try again:

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

Your identity is: Your Name <you@example.com>
If you want to change this, please re-run 'repo init' with --config-name

[...]

repo has been initialized in /root/couchbase

Repo was initialized successfully. Let's sync!

# repo sync
[...]
Fetching projects: 100% (25/25), done.
Checking out files: 100% (2988/2988), done.
Checking out files: 100% (11107/11107), done.
Checking out files: 100% (3339/3339), done.
Checking out files: 100% (1256/1256), done.
Checking out files: 100% (4298/4298), done.
Syncing work tree: 100% (25/25), done.

We now have our source environment set up.

Building

Let's invoke the makefile.

# gmake
(cd build && cmake -G "Unix Makefiles" -D CMAKE_INSTALL_PREFIX="/root/couchbase/install" -D CMAKE_PREFIX_PATH=";/root/couchbase/install" -D PRODUCT_VERSION= -D BUILD_ENTERPRISE= -D CMAKE_BUILD_TYPE=Debug  ..)
cmake: not found
Makefile:42: recipe for target 'build/Makefile' failed
gmake[1]: *** [build/Makefile] Error 127
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

CMake is missing? Let's install it.

# make -C /usr/ports/devel/cmake install clean

Let's try again...

# gmake

CMake Error at tlm/cmake/Modules/FindCouchbaseTcMalloc.cmake:38 (MESSAGE):
  Can not find tcmalloc.  Exiting.
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseMemoryAllocator.cmake:3 (INCLUDE)
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:11 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

-- Configuring incomplete, errors occurred!
See also "/root/couchbase/build/CMakeFiles/CMakeOutput.log".
Makefile:42: recipe for target 'build/Makefile' failed
gmake[1]: *** [build/Makefile] Error 1
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

What the hell is the system looking for?

# cat tlm/cmake/Modules/FindCouchbaseTcMalloc.cmake
[...]
FIND_PATH(TCMALLOC_INCLUDE_DIR gperftools/malloc_hook_c.h
          PATHS
              ${_gperftools_exploded}/include)
[...]

Where is that malloc_hook_c.h?

# grep -R gperftools/malloc_hook_c.h *
gperftools/Makefile.am:                                    src/gperftools/malloc_hook_c.h \
gperftools/Makefile.am:                               src/gperftools/malloc_hook_c.h \
gperftools/Makefile.am:##                           src/gperftools/malloc_hook_c.h \
gperftools/src/google/malloc_hook_c.h:#warning "google/malloc_hook_c.h is deprecated. Use gperftools/malloc_hook_c.h instead"
gperftools/src/google/malloc_hook_c.h:#include <gperftools/malloc_hook_c.h>
gperftools/src/gperftools/malloc_hook.h:#include <gperftools/malloc_hook_c.h>  // a C version of the malloc_hook interface
gperftools/src/tests/malloc_extension_c_test.c:#include <gperftools/malloc_hook_c.h>
[...]

It's in directory gperftools. Let's build that module first.

# cd gperftools/

# ./autogen.sh

# ./configure
[...]
config.status: creating Makefile
config.status: creating src/gperftools/tcmalloc.h
config.status: creating src/windows/gperftools/tcmalloc.h
config.status: creating src/config.h
config.status: executing depfiles commands
config.status: executing libtool commands

# make && make install

Let's try to build again.

# cd ..

# make
CMake Error at tlm/cmake/Modules/FindCouchbaseIcu.cmake:108 (MESSAGE):
  Can't build Couchbase without ICU
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:16 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

ICU is missing? Let's install it.

# make -C /usr/ports/devel/icu install clean

Let's try again.

# make
CMake Error at tlm/cmake/Modules/FindCouchbaseSnappy.cmake:34 (MESSAGE):
  Can't build Couchbase without Snappy
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:17 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

snappy is missing? Let's install it.

# make -C /usr/ports/archivers/snappy install

Do not make the mistake of installing multimedia/snappy instead. That is a totally unrelated port, and it will install 175 crappy Linux/X11 dependencies on your system.

Let's try again:

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseV8.cmake:52 (MESSAGE):
  Can't build Couchbase without V8
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:18 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

V8 is missing? Let's install it.

# make -C /usr/ports/lang/v8 install clean

Let's try again.

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseErlang.cmake:80 (MESSAGE):
  Erlang not found - cannot continue building
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:21 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

Erlang FTW!

# make -C /usr/ports/lang/erlang install clean

Let's build again.

/root/couchbase/platform/src/cb_time.c:60:2: error: "Don't know how to build cb_get_monotonic_seconds"
#error "Don't know how to build cb_get_monotonic_seconds"
 ^
1 error generated.
platform/CMakeFiles/platform.dir/build.make:169: recipe for target 'platform/CMakeFiles/platform.dir/src/cb_time.c.o' failed
gmake[4]: *** [platform/CMakeFiles/platform.dir/src/cb_time.c.o] Error 1
CMakeFiles/Makefile2:285: recipe for target 'platform/CMakeFiles/platform.dir/all' failed
gmake[3]: *** [platform/CMakeFiles/platform.dir/all] Error 2
Makefile:126: recipe for target 'all' failed
gmake[2]: *** [all] Error 2
Makefile:36: recipe for target 'compile' failed
gmake[1]: *** [compile] Error 2
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

At last! A real error. Let's see the code.

# cat platform/src/cb_time.c

/*
    return a monotonically increasing value with a seconds frequency.
*/
uint64_t cb_get_monotonic_seconds() {
    uint64_t seconds = 0;
#if defined(WIN32)
    /* GetTickCound64 gives us near 60years of ticks...*/
    seconds =  (GetTickCount64() / 1000);
#elif defined(__APPLE__)
    uint64_t time = mach_absolute_time();

    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
      mach_timebase_info(&timebase);
    }

    seconds = (double)time * timebase.numer / timebase.denom * 1e-9;
#elif defined(__linux__) || defined(__sun)
    /* Linux and Solaris can use clock_gettime */
    struct timespec tm;
    if (clock_gettime(CLOCK_MONOTONIC, &tm) == -1) {
        abort();
    }
    seconds = tm.tv_sec;
#else
#error "Don't know how to build cb_get_monotonic_seconds"
#endif

    return seconds;
}

FreeBSD also has clock_gettime, so let's patch the file:

diff -u platform/src/cb_time.c.orig platform/src/cb_time.c
--- platform/src/cb_time.c.orig 2015-10-07 19:26:14.258513000 +0200
+++ platform/src/cb_time.c      2015-10-07 19:26:29.768324000 +0200
@@ -49,7 +49,7 @@
     }

     seconds = (double)time * timebase.numer / timebase.denom * 1e-9;
-#elif defined(__linux__) || defined(__sun)
+#elif defined(__linux__) || defined(__sun) || defined(__FreeBSD__)
     /* Linux and Solaris can use clock_gettime */
     struct timespec tm;
     if (clock_gettime(CLOCK_MONOTONIC, &tm) == -1) {

Next error, please.

# gmake
Linking CXX shared library libplatform.so
/usr/bin/ld: cannot find -ldl
CC: error: linker command failed with exit code 1 (use -v to see invocation)
platform/CMakeFiles/platform.dir/build.make:210: recipe for target 'platform/libplatform.so.0.1.0' failed
gmake[4]: *** [platform/libplatform.so.0.1.0] Error 1

Aaah, good old Linux dl library. Let's get rid of that in the CMake file:

diff -u CMakeLists.txt.orig CMakeLists.txt
--- CMakeLists.txt.orig 2015-10-07 19:30:45.546580000 +0200
+++ CMakeLists.txt      2015-10-07 19:36:27.052693000 +0200
@@ -34,7 +34,9 @@
 ELSE (WIN32)
    SET(PLATFORM_FILES src/cb_pthreads.c src/urandom.c)
    SET(THREAD_LIBS "pthread")
-   SET(DLOPENLIB "dl")
+   IF(NOT CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      SET(DLOPENLIB "dl")
+   ENDIF(NOT CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")

    IF (NOT APPLE)
       SET(RTLIB "rt")

Next!

FreeBSD has DTrace, but not the same one as Solaris, so we must disable it.

Someone already did that: see the commit "Disable DTrace for FreeBSD" for the patch:

--- a/cmake/Modules/FindCouchbaseDtrace.cmake
+++ b/cmake/Modules/FindCouchbaseDtrace.cmake
@@ -1,18 +1,19 @@
-# stupid systemtap use a binary named dtrace as well..
+# stupid systemtap use a binary named dtrace as well, but it's not dtrace
+IF (NOT CMAKE_SYSTEM_NAME STREQUAL "Linux")
+   IF (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      MESSAGE(STATUS "We don't have support for DTrace on FreeBSD")
+   ELSE (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      FIND_PROGRAM(DTRACE dtrace)
+      IF (DTRACE)
+         SET(ENABLE_DTRACE True CACHE BOOL "Whether DTrace has been found")
+         MESSAGE(STATUS "Found dtrace in ${DTRACE}")
 
-IF (NOT ${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
+         IF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
+            SET(DTRACE_NEED_INSTUMENT True CACHE BOOL
+                "Whether DTrace should instrument object files")
+         ENDIF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
+      ENDIF (DTRACE)
 
-FIND_PROGRAM(DTRACE dtrace)
-IF (DTRACE)
-   SET(ENABLE_DTRACE True CACHE BOOL "Whether DTrace has been found")
-   MESSAGE(STATUS "Found dtrace in ${DTRACE}")
-
-   IF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
-      SET(DTRACE_NEED_INSTUMENT True CACHE BOOL
-          "Whether DTrace should instrument object files")
-   ENDIF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
-ENDIF (DTRACE)
-
-MARK_AS_ADVANCED(DTRACE_NEED_INSTUMENT ENABLE_DTRACE DTRACE)
-
-ENDIF (NOT ${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
+      MARK_AS_ADVANCED(DTRACE_NEED_INSTUMENT ENABLE_DTRACE DTRACE)
+   ENDIF (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+ENDIF (NOT CMAKE_SYSTEM_NAME STREQUAL "Linux")

Next!

# gmake
Linking C executable couch_compact
libcouchstore.so: undefined reference to `fdatasync'
cc: error: linker command failed with exit code 1 (use -v to see invocation)
couchstore/CMakeFiles/couch_compact.dir/build.make:89: recipe for target 'couchstore/couch_compact' failed
gmake[4]: *** [couchstore/couch_compact] Error 1
CMakeFiles/Makefile2:1969: recipe for target 'couchstore/CMakeFiles/couch_compact.dir/all' failed

FreeBSD does not have fdatasync. Instead, we should use fsync:

diff -u couchstore/config.cmake.h.in.orig couchstore/config.cmake.h.in
--- couchstore/config.cmake.h.in.orig   2015-10-07 19:56:05.461932000 +0200
+++ couchstore/config.cmake.h.in        2015-10-07 19:56:42.973040000 +0200
@@ -38,10 +38,10 @@
 #include <unistd.h>
 #endif

-#ifdef __APPLE__
-/* autoconf things OS X has fdatasync but it doesn't */
+#if defined(__APPLE__) || defined(__FreeBSD__)
+/* autoconf things OS X  and FreeBSD have fdatasync but they don't */
 #define fdatasync(FD) fsync(FD)
-#endif /* __APPLE__ */
+#endif /* __APPLE__ || __FreeBSD__ */

 #include <platform/platform.h>

Next!

[ 56%] Building C object sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o
/root/couchbase/sigar/src/sigar.c:1071:12: fatal error: 'utmp.h' file not found
#  include <utmp.h>
           ^
1 error generated.
sigar/build-src/CMakeFiles/sigar.dir/build.make:77: recipe for target 'sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o' failed
gmake[4]: *** [sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o] Error 1
CMakeFiles/Makefile2:4148: recipe for target 'sigar/build-src/CMakeFiles/sigar.dir/all' failed

I was planning to port that file to utmpx, and then I wondered how the FreeBSD port of the library (java/sigar) was working. It turned out the patch had already been done:

Commit "Make utmp-handling more standards-compliant." on GitHub -> amishHammer -> sigar (https://github.com/amishHammer/sigar/commit/67b476efe0f2a7c644f3966b79f5e358f67752e9)

diff --git a/src/sigar.c b/src/sigar.c
index 8bd7e91..7f76dfd 100644
--- a/src/sigar.c
+++ b/src/sigar.c
@@ -30,6 +30,11 @@
 #ifndef WIN32
 #include <arpa/inet.h>
 #endif
+#if defined(HAVE_UTMPX_H)
+# include <utmpx.h>
+#elif defined(HAVE_UTMP_H)
+# include <utmp.h>
+#endif
 
 #include "sigar.h"
 #include "sigar_private.h"
@@ -1024,40 +1029,7 @@ SIGAR_DECLARE(int) sigar_who_list_destroy(sigar_t *sigar,
     return SIGAR_OK;
 }
 
-#ifdef DARWIN
-#include <AvailabilityMacros.h>
-#endif
-#ifdef MAC_OS_X_VERSION_10_5
-#  if MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5
-#    define SIGAR_NO_UTMP
-#  endif
-/* else 10.4 and earlier or compiled with -mmacosx-version-min=10.3 */
-#endif
-
-#if defined(__sun)
-#  include <utmpx.h>
-#  define SIGAR_UTMP_FILE _UTMPX_FILE
-#  define ut_time ut_tv.tv_sec
-#elif defined(WIN32)
-/* XXX may not be the default */
-#define SIGAR_UTMP_FILE "C:\\cygwin\\var\\run\\utmp"
-#define UT_LINESIZE    16
-#define UT_NAMESIZE    16
-#define UT_HOSTSIZE    256
-#define UT_IDLEN   2
-#define ut_name ut_user
-
-struct utmp {
-    short ut_type; 
-    int ut_pid;        
-    char ut_line[UT_LINESIZE];
-    char ut_id[UT_IDLEN];
-    time_t ut_time;    
-    char ut_user[UT_NAMESIZE]; 
-    char ut_host[UT_HOSTSIZE]; 
-    long ut_addr;  
-};
-#elif defined(NETWARE)
+#if defined(NETWARE)
 static char *getpass(const char *prompt)
 {
     static char password[BUFSIZ];
@@ -1067,109 +1039,48 @@ static char *getpass(const char *prompt)
 
     return (char *)&password;
 }
-#elif !defined(SIGAR_NO_UTMP)
-#  include <utmp.h>
-#  ifdef UTMP_FILE
-#    define SIGAR_UTMP_FILE UTMP_FILE
-#  else
-#    define SIGAR_UTMP_FILE _PATH_UTMP
-#  endif
-#endif
-
-#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__) || defined(DARWIN)
-#  define ut_user ut_name
 #endif
 
-#ifdef DARWIN
-/* XXX from utmpx.h; sizeof changed in 10.5 */
-/* additionally, utmpx does not work on 10.4 */
-#define SIGAR_HAS_UTMPX
-#define _PATH_UTMPX     "/var/run/utmpx"
-#define _UTX_USERSIZE   256     /* matches MAXLOGNAME */
-#define _UTX_LINESIZE   32
-#define _UTX_IDSIZE     4
-#define _UTX_HOSTSIZE   256
-struct utmpx {
-    char ut_user[_UTX_USERSIZE];    /* login name */
-    char ut_id[_UTX_IDSIZE];        /* id */
-    char ut_line[_UTX_LINESIZE];    /* tty name */
-    pid_t ut_pid;                   /* process id creating the entry */
-    short ut_type;                  /* type of this entry */
-    struct timeval ut_tv;           /* time entry was created */
-    char ut_host[_UTX_HOSTSIZE];    /* host name */
-    __uint32_t ut_pad[16];          /* reserved for future use */
-};
-#define ut_xtime ut_tv.tv_sec
-#define UTMPX_USER_PROCESS      7
-/* end utmpx.h */
-#define SIGAR_UTMPX_FILE _PATH_UTMPX
-#endif
-
-#if !defined(NETWARE) && !defined(_AIX)
-
 #define WHOCPY(dest, src) \
     SIGAR_SSTRCPY(dest, src); \
     if (sizeof(src) < sizeof(dest)) \
         dest[sizeof(src)] = '\0'
 
-#ifdef SIGAR_HAS_UTMPX
-static int sigar_who_utmpx(sigar_t *sigar,
-                           sigar_who_list_t *wholist)
+static int sigar_who_utmp(sigar_t *sigar,
+                          sigar_who_list_t *wholist)
 {
-    FILE *fp;
-    struct utmpx ut;
+#if defined(HAVE_UTMPX_H)
+    struct utmpx *ut;
 
-    if (!(fp = fopen(SIGAR_UTMPX_FILE, "r"))) {
-        return errno;
-    }
+    setutxent();
 
-    while (fread(&ut, sizeof(ut), 1, fp) == 1) {
+    while ((ut = getutxent()) != NULL) {
         sigar_who_t *who;
 
-        if (*ut.ut_user == '\0') {
+        if (*ut->ut_user == '\0') {
             continue;
         }
 
-#ifdef UTMPX_USER_PROCESS
-        if (ut.ut_type != UTMPX_USER_PROCESS) {
+        if (ut->ut_type != USER_PROCESS) {
             continue;
         }
-#endif
 
         SIGAR_WHO_LIST_GROW(wholist);
         who = &wholist->data[wholist->number++];
 
-        WHOCPY(who->user, ut.ut_user);
-        WHOCPY(who->device, ut.ut_line);
-        WHOCPY(who->host, ut.ut_host);
+        WHOCPY(who->user, ut->ut_user);
+        WHOCPY(who->device, ut->ut_line);
+        WHOCPY(who->host, ut->ut_host);
 
-        who->time = ut.ut_xtime;
+        who->time = ut->ut_tv.tv_sec;
     }
 
-    fclose(fp);
-
-    return SIGAR_OK;
-}
-#endif
-
-#if defined(SIGAR_NO_UTMP) && defined(SIGAR_HAS_UTMPX)
-#define sigar_who_utmp sigar_who_utmpx
-#else
-static int sigar_who_utmp(sigar_t *sigar,
-                          sigar_who_list_t *wholist)
-{
+    endutxent();
+#elif defined(HAVE_UTMP_H)
     FILE *fp;
-#ifdef __sun
-    /* use futmpx w/ pid32_t for sparc64 */
-    struct futmpx ut;
-#else
     struct utmp ut;
-#endif
-    if (!(fp = fopen(SIGAR_UTMP_FILE, "r"))) {
-#ifdef SIGAR_HAS_UTMPX
-        /* Darwin 10.5 */
-        return sigar_who_utmpx(sigar, wholist);
-#endif
+
+    if (!(fp = fopen(_PATH_UTMP, "r"))) {
         return errno;
     }
 
@@ -1189,7 +1100,7 @@ static int sigar_who_utmp(sigar_t *sigar,
         SIGAR_WHO_LIST_GROW(wholist);
         who = &wholist->data[wholist->number++];
 
-        WHOCPY(who->user, ut.ut_user);
+        WHOCPY(who->user, ut.ut_name);
         WHOCPY(who->device, ut.ut_line);
         WHOCPY(who->host, ut.ut_host);
 
@@ -1197,11 +1108,10 @@ static int sigar_who_utmp(sigar_t *sigar,
     }
 
     fclose(fp);
+#endif
 
     return SIGAR_OK;
 }
-#endif /* SIGAR_NO_UTMP */
-#endif /* NETWARE */
 
 #if defined(WIN32)

Next!

# gmake
[ 75%] Generating couch_btree.beam
compile: warnings being treated as errors
/root/couchbase/couchdb/src/couchdb/couch_btree.erl:415: variable 'NodeList' exported from 'case' (line 391)
/root/couchbase/couchdb/src/couchdb/couch_btree.erl:1010: variable 'NodeList' exported from 'case' (line 992)
couchdb/src/couchdb/CMakeFiles/couchdb.dir/build.make:151: recipe for target 'couchdb/src/couchdb/couch_btree.beam' failed
gmake[4]: *** [couchdb/src/couchdb/couch_btree.beam] Error 1
CMakeFiles/Makefile2:5531: recipe for target 'couchdb/src/couchdb/CMakeFiles/couchdb.dir/all' failed

Fortunately, I'm fluent in Erlang.

I'm not sure why the compiler option +warn_export_vars was set if the code contains such errors. Let's fix them.

diff -u /root/couchbase/couchdb/src/couchdb/couch_btree.erl /root/couchbase/couchdb/src/couchdb/couch_btree.erl.orig
--- /root/couchbase/couchdb/src/couchdb/couch_btree.erl 2015-10-07 22:01:05.191344000 +0200
+++ /root/couchbase/couchdb/src/couchdb/couch_btree.erl.orig    2015-10-07 21:59:43.359322000 +0200
@@ -388,12 +388,13 @@
     end.

 modify_node(Bt, RootPointerInfo, Actions, QueryOutput, Acc, PurgeFun, PurgeFunAcc, KeepPurging) ->
-    {NodeType, NodeList} = case RootPointerInfo of
+    case RootPointerInfo of
     nil ->
-        {kv_node, []};
+        NodeType = kv_node,
+        NodeList = [];
     _Tuple ->
         Pointer = element(1, RootPointerInfo),
-        get_node(Bt, Pointer)
+        {NodeType, NodeList} = get_node(Bt, Pointer)
     end,

     case NodeType of
@@ -988,12 +989,13 @@

 guided_purge(Bt, NodeState, GuideFun, GuideAcc) ->
     % inspired by modify_node/5
-    {NodeType, NodeList} = case NodeState of
+    case NodeState of
     nil ->
-        {kv_node, []};
+        NodeType = kv_node,
+        NodeList = [];
     _Tuple ->
         Pointer = element(1, NodeState),
-        get_node(Bt, Pointer)
+        {NodeType, NodeList} = get_node(Bt, Pointer)
     end,
     {ok, NewNodeList, GuideAcc2, Bt2, Go} =
     case NodeType of

diff -u /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl.orig /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl
--- /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl.orig        2015-10-07 22:01:48.495966000 +0200
+++ /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl     2015-10-07 22:02:15.620989000 +0200
@@ -142,14 +142,14 @@
         true ->
             {ok, DbCompactPid} = couch_db:start_compact(Db),
             TimeLeft = compact_time_left(Config),
-            case Config#config.parallel_view_compact of
+            ViewsMonRef = case Config#config.parallel_view_compact of
             true ->
                 ViewsCompactPid = spawn_link(fun() ->
                     maybe_compact_views(DbName, DDocNames, Config)
                 end),
-                ViewsMonRef = erlang:monitor(process, ViewsCompactPid);
+                erlang:monitor(process, ViewsCompactPid);
             false ->
-                ViewsMonRef = nil
+                nil
             end,
             DbMonRef = erlang:monitor(process, DbCompactPid),
             receive

Next!

[ 84%] Generating ebin/couch_set_view_group.beam
/root/couchbase/couchdb/src/couch_set_view/src/couch_set_view_group.erl:3178: type dict() undefined
couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/build.make:87: recipe for target 'couchdb/src/couch_set_view/ebin/couch_set_view_group.beam' failed
gmake[4]: *** [couchdb/src/couch_set_view/ebin/couch_set_view_group.beam] Error 1
CMakeFiles/Makefile2:5720: recipe for target 'couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/all' failed
gmake[3]: *** [couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/all] Error 2

These happen because I'm building the project with Erlang 18. I guess I wouldn't have had them with version 17.

Anyway, let's fix them.

--- couchdb.orig/src/couch_dcp/src/couch_dcp_client.erl 2015-10-08 11:26:37.034138000 +0200
+++ couchdb/src/couch_dcp/src/couch_dcp_client.erl  2015-10-07 22:07:35.556126000 +0200
@@ -47,13 +47,13 @@
     bufsocket = nil                 :: #bufsocket{} | nil,
     timeout = 5000                  :: timeout(),
     request_id = 0                  :: request_id(),
-    pending_requests = dict:new()   :: dict(),
-    stream_queues = dict:new()      :: dict(),
+    pending_requests = dict:new()   :: dict:dict(),
+    stream_queues = dict:new()      :: dict:dict(),
     active_streams = []             :: list(),
     worker_pid                      :: pid(),
     max_buffer_size = ?MAX_BUF_SIZE :: integer(),
     total_buffer_size = 0           :: non_neg_integer(),
-    stream_info = dict:new()        :: dict(),
+    stream_info = dict:new()        :: dict:dict(),
     args = []                       :: list()
 }).
 
@@ -1378,7 +1378,7 @@
         {error, Error}
     end.
 
--spec get_queue_size(queue(), non_neg_integer()) -> non_neg_integer().
+-spec get_queue_size(queue:queue(), non_neg_integer()) -> non_neg_integer().
 get_queue_size(EvQueue, Size) ->
     case queue:out(EvQueue) of
     {empty, _} ->
diff -r -u couchdb.orig/src/couch_set_view/src/couch_set_view_group.erl couchdb/src/couch_set_view/src/couch_set_view_group.erl
--- couchdb.orig/src/couch_set_view/src/couch_set_view_group.erl    2015-10-08 11:26:37.038856000 +0200
+++ couchdb/src/couch_set_view/src/couch_set_view_group.erl 2015-10-07 22:04:53.198951000 +0200
@@ -118,7 +118,7 @@
     auto_transfer_replicas = true      :: boolean(),
     replica_partitions = []            :: ordsets:ordset(partition_id()),
     pending_transition_waiters = []    :: [{From::{pid(), reference()}, #set_view_group_req{}}],
-    update_listeners = dict:new()      :: dict(),
+    update_listeners = dict:new()      :: dict:dict(),
     compact_log_files = nil            :: 'nil' | {[[string()]], partition_seqs(), partition_versions()},
     timeout = ?DEFAULT_TIMEOUT         :: non_neg_integer() | 'infinity'
 }).
@@ -3136,7 +3136,7 @@
     }.
 
 
--spec notify_update_listeners(#state{}, dict(), #set_view_group{}) -> dict().
+-spec notify_update_listeners(#state{}, dict:dict(), #set_view_group{}) -> dict:dict().
 notify_update_listeners(State, Listeners, NewGroup) ->
     case dict:size(Listeners) == 0 of
     true ->
@@ -3175,7 +3175,7 @@
     end.
 
 
--spec error_notify_update_listeners(#state{}, dict(), monitor_error()) -> dict().
+-spec error_notify_update_listeners(#state{}, dict:dict(), monitor_error()) -> dict:dict().
 error_notify_update_listeners(State, Listeners, Error) ->
     _ = dict:fold(
         fun(Ref, #up_listener{pid = ListPid, partition = PartId}, _Acc) ->
diff -r -u couchdb.orig/src/couch_set_view/src/mapreduce_view.erl couchdb/src/couch_set_view/src/mapreduce_view.erl
--- couchdb.orig/src/couch_set_view/src/mapreduce_view.erl  2015-10-08 11:26:37.040295000 +0200
+++ couchdb/src/couch_set_view/src/mapreduce_view.erl   2015-10-07 22:05:56.157242000 +0200
@@ -109,7 +109,7 @@
     convert_primary_index_kvs_to_binary(Rest, Group, [{KeyBin, V} | Acc]).
 
 
--spec finish_build(#set_view_group{}, dict(), string()) ->
+-spec finish_build(#set_view_group{}, dict:dict(), string()) ->
                           {#set_view_group{}, pid()}.
 finish_build(Group, TmpFiles, TmpDir) ->
     #set_view_group{
diff -r -u couchdb.orig/src/couchdb/couch_btree.erl couchdb/src/couchdb/couch_btree.erl
--- couchdb.orig/src/couchdb/couch_btree.erl    2015-10-08 11:26:37.049320000 +0200
+++ couchdb/src/couchdb/couch_btree.erl 2015-10-07 22:01:05.191344000 +0200
@@ -388,13 +388,12 @@
     end.
 
 modify_node(Bt, RootPointerInfo, Actions, QueryOutput, Acc, PurgeFun, PurgeFunAcc, KeepPurging) ->
-    case RootPointerInfo of
+    {NodeType, NodeList} = case RootPointerInfo of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, RootPointerInfo),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,
 
     case NodeType of
@@ -989,13 +988,12 @@
 
 guided_purge(Bt, NodeState, GuideFun, GuideAcc) ->
     % inspired by modify_node/5
-    case NodeState of
+    {NodeType, NodeList} = case NodeState of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, NodeState),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,
     {ok, NewNodeList, GuideAcc2, Bt2, Go} =
     case NodeType of
diff -r -u couchdb.orig/src/couchdb/couch_compaction_daemon.erl couchdb/src/couchdb/couch_compaction_daemon.erl
--- couchdb.orig/src/couchdb/couch_compaction_daemon.erl    2015-10-08 11:26:37.049734000 +0200
+++ couchdb/src/couchdb/couch_compaction_daemon.erl 2015-10-07 22:02:15.620989000 +0200
@@ -142,14 +142,14 @@
         true ->
             {ok, DbCompactPid} = couch_db:start_compact(Db),
             TimeLeft = compact_time_left(Config),
-            case Config#config.parallel_view_compact of
+            ViewsMonRef = case Config#config.parallel_view_compact of
             true ->
                 ViewsCompactPid = spawn_link(fun() ->
                     maybe_compact_views(DbName, DDocNames, Config)
                 end),
-                ViewsMonRef = erlang:monitor(process, ViewsCompactPid);
+                erlang:monitor(process, ViewsCompactPid);
             false ->
-                ViewsMonRef = nil
+                nil
             end,
             DbMonRef = erlang:monitor(process, DbCompactPid),
             receive
Next!

[ 98%] Generating ebin/vtree_cleanup.beam
compile: warnings being treated as errors
/root/couchbase/geocouch/vtree/src/vtree_cleanup.erl:32: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
/root/couchbase/geocouch/vtree/src/vtree_cleanup.erl:42: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
../geocouch/build/vtree/CMakeFiles/vtree.dir/build.make:64: recipe for target '../geocouch/build/vtree/ebin/vtree_cleanup.beam' failed
gmake[4]: *** [../geocouch/build/vtree/ebin/vtree_cleanup.beam] Error 1
CMakeFiles/Makefile2:6702: recipe for target '../geocouch/build/vtree/CMakeFiles/vtree.dir/all' failed
gmake[3]: *** [../geocouch/build/vtree/CMakeFiles/vtree.dir/all] Error 2
More deprecated erlang:now/0 calls and dict() specs to fix:

diff -r -u geocouch.orig/gc-couchbase/src/spatial_view.erl geocouch/gc-couchbase/src/spatial_view.erl
--- geocouch.orig/gc-couchbase/src/spatial_view.erl 2015-10-08 11:29:05.323361000 +0200
+++ geocouch/gc-couchbase/src/spatial_view.erl  2015-10-07 22:17:09.741790000 +0200
@@ -166,7 +166,7 @@
 
 
 % Build the tree out of the sorted files
--spec finish_build(#set_view_group{}, dict(), string()) ->
+-spec finish_build(#set_view_group{}, dict:dict(), string()) ->
                           {#set_view_group{}, pid()}.
 finish_build(Group, TmpFiles, TmpDir) ->
     #set_view_group{
diff -r -u geocouch.orig/vtree/src/vtree_cleanup.erl geocouch/vtree/src/vtree_cleanup.erl
--- geocouch.orig/vtree/src/vtree_cleanup.erl   2015-10-08 11:29:05.327423000 +0200
+++ geocouch/vtree/src/vtree_cleanup.erl    2015-10-07 22:12:26.915600000 +0200
@@ -29,7 +29,7 @@
 cleanup(#vtree{root=nil}=Vt, _Nodes) ->
     Vt;
 cleanup(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = cleanup_multiple(Vt, PartitionedNodes, [Root]),
@@ -39,7 +39,7 @@
                       vtree_modify:write_new_root(Vt, KpNodes)
               end,
     ?LOG_DEBUG("Cleanup took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.
 
 -spec cleanup_multiple(Vt :: #vtree{}, ToCleanup :: [#kv_node{}],
diff -r -u geocouch.orig/vtree/src/vtree_delete.erl geocouch/vtree/src/vtree_delete.erl
--- geocouch.orig/vtree/src/vtree_delete.erl    2015-10-08 11:29:05.327537000 +0200
+++ geocouch/vtree/src/vtree_delete.erl 2015-10-07 22:13:51.733064000 +0200
@@ -30,7 +30,7 @@
 delete(#vtree{root=nil}=Vt, _Nodes) ->
     Vt;
 delete(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = delete_multiple(Vt, PartitionedNodes, [Root]),
@@ -40,7 +40,7 @@
                       vtree_modify:write_new_root(Vt, KpNodes)
               end,
     ?LOG_DEBUG("Deletion took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.
 
 
diff -r -u geocouch.orig/vtree/src/vtree_insert.erl geocouch/vtree/src/vtree_insert.erl
--- geocouch.orig/vtree/src/vtree_insert.erl    2015-10-08 11:29:05.327648000 +0200
+++ geocouch/vtree/src/vtree_insert.erl 2015-10-07 22:15:50.812447000 +0200
@@ -26,7 +26,7 @@
 insert(Vt, []) ->
     Vt;
 insert(#vtree{root=nil}=Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     % If we would do single inserts, the first node that was inserted would
     % have set the original Mbb `MbbO`
     MbbO = (hd(Nodes))#kv_node.key,
@@ -48,7 +48,7 @@
             ArbitraryBulkSize = round(math:log(Threshold)+50),
             Vt3 = insert_in_bulks(Vt2, Rest, ArbitraryBulkSize),
             ?LOG_DEBUG("Insertion into empty tree took: ~ps~n",
-                      [timer:now_diff(now(), T1)/1000000]),
+                      [erlang:monotonic_time(seconds) - T1]),
             ?LOG_DEBUG("Root pos: ~p~n", [(Vt3#vtree.root)#kp_node.childpointer]),
             Vt3;
         false ->
@@ -56,13 +56,13 @@
             Vt#vtree{root=Root}
     end;
 insert(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = insert_multiple(Vt, PartitionedNodes, [Root]),
     NewRoot = vtree_modify:write_new_root(Vt, KpNodes),
     ?LOG_DEBUG("Insertion into existing tree took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.
diff -u ns_server/deps/ale/src/ale.erl.orig ns_server/deps/ale/src/ale.erl
--- ns_server/deps/ale/src/ale.erl.orig 2015-10-07 22:19:28.730212000 +0200
+++ ns_server/deps/ale/src/ale.erl      2015-10-07 22:20:09.788761000 +0200
@@ -45,12 +45,12 @@

 -include("ale.hrl").

--record(state, {sinks   :: dict(),
-                loggers :: dict()}).
+-record(state, {sinks   :: dict:dict(),
+                loggers :: dict:dict()}).

 -record(logger, {name      :: atom(),
                  loglevel  :: loglevel(),
-                 sinks     :: dict(),
+                 sinks     :: dict:dict(),
                  formatter :: module()}).

 -record(sink, {name     :: atom(),
==> ns_babysitter (compile)
src/ns_crash_log.erl:18: type queue() undefined
../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/build.make:49: recipe for target '../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter' failed
gmake[4]: *** [../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter] Error 1
CMakeFiles/Makefile2:7484: recipe for target '../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/all' failed
gmake[3]: *** [../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/all] Error 2
diff -r -u ns_server.orig/deps/ale/src/ale.erl ns_server/deps/ale/src/ale.erl
--- ns_server.orig/deps/ale/src/ale.erl 2015-10-08 11:31:20.520281000 +0200
+++ ns_server/deps/ale/src/ale.erl  2015-10-07 22:20:09.788761000 +0200
@@ -45,12 +45,12 @@
 
 -include("ale.hrl").
 
--record(state, {sinks   :: dict(),
-                loggers :: dict()}).
+-record(state, {sinks   :: dict:dict(),
+                loggers :: dict:dict()}).
 
 -record(logger, {name      :: atom(),
                  loglevel  :: loglevel(),
-                 sinks     :: dict(),
+                 sinks     :: dict:dict(),
                  formatter :: module()}).
 
 -record(sink, {name     :: atom(),
diff -r -u ns_server.orig/deps/ns_babysitter/src/ns_crash_log.erl ns_server/deps/ns_babysitter/src/ns_crash_log.erl
--- ns_server.orig/deps/ns_babysitter/src/ns_crash_log.erl  2015-10-08 11:31:20.540433000 +0200
+++ ns_server/deps/ns_babysitter/src/ns_crash_log.erl   2015-10-07 22:21:45.292975000 +0200
@@ -13,9 +13,9 @@
 -define(MAX_CRASHES_LEN, 100).
 
 -record(state, {file_path :: file:filename(),
-                crashes :: queue(),
+                crashes :: queue:queue(),
                 crashes_len :: non_neg_integer(),
-                crashes_saved :: queue(),
+                crashes_saved :: queue:queue(),
                 consumer_from = undefined :: undefined | {pid(), reference()},
                 consumer_mref = undefined :: undefined | reference()
                }).
diff -r -u ns_server.orig/include/remote_clusters_info.hrl ns_server/include/remote_clusters_info.hrl
--- ns_server.orig/include/remote_clusters_info.hrl 2015-10-08 11:31:20.544760000 +0200
+++ ns_server/include/remote_clusters_info.hrl  2015-10-07 22:22:48.541494000 +0200
@@ -20,6 +20,6 @@
                         cluster_cert :: binary() | undefined,
                         server_list_nodes :: [#remote_node{}],
                         bucket_caps :: [binary()],
-                        raw_vbucket_map :: dict(),
-                        capi_vbucket_map :: dict(),
+                        raw_vbucket_map :: dict:dict(),
+                        capi_vbucket_map :: dict:dict(),
                         cluster_version :: {integer(), integer()}}).
diff -r -u ns_server.orig/src/auto_failover.erl ns_server/src/auto_failover.erl
--- ns_server.orig/src/auto_failover.erl    2015-10-08 11:31:21.396519000 +0200
+++ ns_server/src/auto_failover.erl 2015-10-08 11:19:43.710301000 +0200
@@ -336,7 +336,7 @@
 %%
 
 %% @doc Returns a list of nodes that should be active, but are not running.
--spec actual_down_nodes(dict(), [atom()], [{atom(), term()}]) -> [atom()].
+-spec actual_down_nodes(dict:dict(), [atom()], [{atom(), term()}]) -> [atom()].
 actual_down_nodes(NodesDict, NonPendingNodes, Config) ->
     % Get all buckets
     BucketConfigs = ns_bucket:get_buckets(Config),
diff -r -u ns_server.orig/src/dcp_upgrade.erl ns_server/src/dcp_upgrade.erl
--- ns_server.orig/src/dcp_upgrade.erl  2015-10-08 11:31:21.400562000 +0200
+++ ns_server/src/dcp_upgrade.erl   2015-10-08 11:19:47.370353000 +0200
@@ -37,7 +37,7 @@
                 num_buckets :: non_neg_integer(),
                 bucket :: bucket_name(),
                 bucket_config :: term(),
-                progress :: dict(),
+                progress :: dict:dict(),
                 workers :: [pid()]}).
 
 start_link(Buckets) ->
diff -r -u ns_server.orig/src/janitor_agent.erl ns_server/src/janitor_agent.erl
--- ns_server.orig/src/janitor_agent.erl    2015-10-08 11:31:21.401859000 +0200
+++ ns_server/src/janitor_agent.erl 2015-10-08 11:18:09.979728000 +0200
@@ -43,7 +43,7 @@
                 rebalance_status = finished :: in_process | finished,
                 replicators_primed :: boolean(),
 
-                apply_vbucket_states_queue :: queue(),
+                apply_vbucket_states_queue :: queue:queue(),
                 apply_vbucket_states_worker :: undefined | pid(),
                 rebalance_subprocesses_registry :: pid()}).
 
diff -r -u ns_server.orig/src/menelaus_web_alerts_srv.erl ns_server/src/menelaus_web_alerts_srv.erl
--- ns_server.orig/src/menelaus_web_alerts_srv.erl  2015-10-08 11:31:21.405690000 +0200
+++ ns_server/src/menelaus_web_alerts_srv.erl   2015-10-08 10:58:15.641331000 +0200
@@ -219,7 +219,7 @@
 
 %% @doc if listening on a non localhost ip, detect differences between
 %% external listening host and current node host
--spec check(atom(), dict(), list(), [{atom(),number()}]) -> dict().
+-spec check(atom(), dict:dict(), list(), [{atom(),number()}]) -> dict:dict().
 check(ip, Opaque, _History, _Stats) ->
     {_Name, Host} = misc:node_name_host(node()),
     case can_listen(Host) of
@@ -290,7 +290,7 @@
 
 %% @doc only check for disk usage if there has been no previous
 %% errors or last error was over the timeout ago
--spec hit_rate_limit(atom(), dict()) -> true | false.
+-spec hit_rate_limit(atom(), dict:dict()) -> true | false.
 hit_rate_limit(Key, Dict) ->
     case dict:find(Key, Dict) of
         error ->
@@ -355,7 +355,7 @@
 
 
 %% @doc list of buckets thats measured stats have increased
--spec stat_increased(dict(), dict()) -> list().
+-spec stat_increased(dict:dict(), dict:dict()) -> list().
 stat_increased(New, Old) ->
     [Bucket || {Bucket, Val} <- dict:to_list(New), increased(Bucket, Val, Old)].
 
@@ -392,7 +392,7 @@
 
 
 %% @doc Lookup old value and test for increase
--spec increased(string(), integer(), dict()) -> true | false.
+-spec increased(string(), integer(), dict:dict()) -> true | false.
 increased(Key, Val, Dict) ->
     case dict:find(Key, Dict) of
         error ->
diff -r -u ns_server.orig/src/misc.erl ns_server/src/misc.erl
--- ns_server.orig/src/misc.erl 2015-10-08 11:31:21.407175000 +0200
+++ ns_server/src/misc.erl  2015-10-08 10:55:15.167246000 +0200
@@ -54,7 +54,7 @@
 randomize() ->
     case get(random_seed) of
         undefined ->
-            random:seed(erlang:now());
+            random:seed(erlang:timestamp());
         _ ->
             ok
     end.
@@ -303,8 +303,8 @@
 
 position(E, [_|List], N) -> position(E, List, N+1).
 
-now_int()   -> time_to_epoch_int(now()).
-now_float() -> time_to_epoch_float(now()).
+now_int()   -> time_to_epoch_int(erlang:timestamp()).
+now_float() -> time_to_epoch_float(erlang:timestamp()).
 
 time_to_epoch_int(Time) when is_integer(Time) or is_float(Time) ->
   Time;
@@ -1239,7 +1239,7 @@
 
 
 %% Get an item from from a dict, if it doesnt exist return default
--spec dict_get(term(), dict(), term()) -> term().
+-spec dict_get(term(), dict:dict(), term()) -> term().
 dict_get(Key, Dict, Default) ->
     case dict:is_key(Key, Dict) of
         true -> dict:fetch(Key, Dict);
diff -r -u ns_server.orig/src/ns_doctor.erl ns_server/src/ns_doctor.erl
--- ns_server.orig/src/ns_doctor.erl    2015-10-08 11:31:21.410269000 +0200
+++ ns_server/src/ns_doctor.erl 2015-10-08 10:53:49.208657000 +0200
@@ -30,8 +30,8 @@
          get_tasks_version/0, build_tasks_list/2]).
 
 -record(state, {
-          nodes :: dict(),
-          tasks_hash_nodes :: undefined | dict(),
+          nodes :: dict:dict(),
+          tasks_hash_nodes :: undefined | dict:dict(),
           tasks_hash :: undefined | integer(),
           tasks_version :: undefined | string()
          }).
@@ -112,14 +112,14 @@
     RV = case dict:find(Node, Nodes) of
              {ok, Status} ->
                  LiveNodes = [node() | nodes()],
-                 annotate_status(Node, Status, now(), LiveNodes);
+                 annotate_status(Node, Status, erlang:timestamp(), LiveNodes);
              _ ->
                  []
          end,
     {reply, RV, State};
 
 handle_call(get_nodes, _From, #state{nodes=Nodes} = State) ->
-    Now = erlang:now(),
+    Now = erlang:timestamp(),
     LiveNodes = [node()|nodes()],
     Nodes1 = dict:map(
                fun (Node, Status) ->
@@ -210,7 +210,7 @@
         orelse OldReadyBuckets =/= NewReadyBuckets.
 
 update_status(Name, Status0, Dict) ->
-    Status = [{last_heard, erlang:now()} | Status0],
+    Status = [{last_heard, erlang:timestamp()} | Status0],
     PrevStatus = case dict:find(Name, Dict) of
                      {ok, V} -> V;
                      error -> []
diff -r -u ns_server.orig/src/ns_janitor_map_recoverer.erl ns_server/src/ns_janitor_map_recoverer.erl
--- ns_server.orig/src/ns_janitor_map_recoverer.erl 2015-10-08 11:31:21.410945000 +0200
+++ ns_server/src/ns_janitor_map_recoverer.erl  2015-10-08 10:52:23.927033000 +0200
@@ -79,7 +79,7 @@
     end.
 
 -spec recover_map([{non_neg_integer(), node()}],
-                  dict(),
+                  dict:dict(),
                   boolean(),
                   non_neg_integer(),
                   pos_integer(),
diff -r -u ns_server.orig/src/ns_memcached.erl ns_server/src/ns_memcached.erl
--- ns_server.orig/src/ns_memcached.erl 2015-10-08 11:31:21.411920000 +0200
+++ ns_server/src/ns_memcached.erl  2015-10-08 10:51:08.281320000 +0200
@@ -65,9 +65,9 @@
           running_very_heavy = 0,
           %% NOTE: otherwise dialyzer seemingly thinks it's possible
           %% for queue fields to be undefined
-          fast_calls_queue = impossible :: queue(),
-          heavy_calls_queue = impossible :: queue(),
-          very_heavy_calls_queue = impossible :: queue(),
+          fast_calls_queue = impossible :: queue:queue(),
+          heavy_calls_queue = impossible :: queue:queue(),
+          very_heavy_calls_queue = impossible :: queue:queue(),
           status :: connecting | init | connected | warmed,
           start_time::tuple(),
           bucket::nonempty_string(),
diff -r -u ns_server.orig/src/ns_orchestrator.erl ns_server/src/ns_orchestrator.erl
--- ns_server.orig/src/ns_orchestrator.erl  2015-10-08 11:31:21.412957000 +0200
+++ ns_server/src/ns_orchestrator.erl   2015-10-08 10:45:51.967739000 +0200
@@ -251,7 +251,7 @@
                             not_needed |
                             {error, {failed_nodes, [node()]}}
   when UUID :: binary(),
-       RecoveryMap :: dict().
+       RecoveryMap :: dict:dict().
 start_recovery(Bucket) ->
     wait_for_orchestrator(),
     gen_fsm:sync_send_event(?SERVER, {start_recovery, Bucket}).
@@ -260,7 +260,7 @@
   when Status :: [{bucket, bucket_name()} |
                   {uuid, binary()} |
                   {recovery_map, RecoveryMap}],
-       RecoveryMap :: dict().
+       RecoveryMap :: dict:dict().
 recovery_status() ->
     case is_recovery_running() of
         false ->
@@ -271,7 +271,7 @@
     end.
 
 -spec recovery_map(bucket_name(), UUID) -> bad_recovery | {ok, RecoveryMap}
-  when RecoveryMap :: dict(),
+  when RecoveryMap :: dict:dict(),
        UUID :: binary().
 recovery_map(Bucket, UUID) ->
     wait_for_orchestrator(),
@@ -1062,7 +1062,7 @@
             {next_state, FsmState, State#janitor_state{remaining_buckets = NewBucketRequests}}
     end.
 
--spec update_progress(dict()) -> ok.
+-spec update_progress(dict:dict()) -> ok.
 update_progress(Progress) ->
     gen_fsm:send_event(?SERVER, {update_progress, Progress}).
 
diff -r -u ns_server.orig/src/ns_replicas_builder.erl ns_server/src/ns_replicas_builder.erl
--- ns_server.orig/src/ns_replicas_builder.erl  2015-10-08 11:31:21.413763000 +0200
+++ ns_server/src/ns_replicas_builder.erl   2015-10-08 10:43:03.655761000 +0200
@@ -153,7 +153,7 @@
             observe_wait_all_done_old_style_loop(Bucket, SrcNode, Sleeper, NewTapNames, SleepsSoFar+1)
     end.
 
--spec filter_true_producers(list(), set(), binary()) -> [binary()].
+-spec filter_true_producers(list(), set:set(), binary()) -> [binary()].
 filter_true_producers(PList, TapNamesSet, StatName) ->
     [TapName
      || {<<"eq_tapq:replication_", Key/binary>>, <<"true">>} <- PList,
diff -r -u ns_server.orig/src/ns_vbucket_mover.erl ns_server/src/ns_vbucket_mover.erl
--- ns_server.orig/src/ns_vbucket_mover.erl 2015-10-08 11:31:21.415305000 +0200
+++ ns_server/src/ns_vbucket_mover.erl  2015-10-08 10:42:02.815008000 +0200
@@ -36,14 +36,14 @@
 
 -export([inhibit_view_compaction/3]).
 
--type progress_callback() :: fun((dict()) -> any()).
+-type progress_callback() :: fun((dict:dict()) -> any()).
 
 -record(state, {bucket::nonempty_string(),
                 disco_events_subscription::pid(),
-                map::array(),
+                map::array:array(),
                 moves_scheduler_state,
                 progress_callback::progress_callback(),
-                all_nodes_set::set(),
+                all_nodes_set::set:set(),
                 replication_type::bucket_replication_type()}).
 
 %%
@@ -218,14 +218,14 @@
 
 %% @private
 %% @doc Convert a map array back to a map list.
--spec array_to_map(array()) -> vbucket_map().
+-spec array_to_map(array:array()) -> vbucket_map().
 array_to_map(Array) ->
     array:to_list(Array).
 
 %% @private
 %% @doc Convert a map, which is normally a list, into an array so that
 %% we can randomly access the replication chains.
--spec map_to_array(vbucket_map()) -> array().
+-spec map_to_array(vbucket_map()) -> array:array().
 map_to_array(Map) ->
     array:fix(array:from_list(Map)).
 
diff -r -u ns_server.orig/src/path_config.erl ns_server/src/path_config.erl
--- ns_server.orig/src/path_config.erl  2015-10-08 11:31:21.415376000 +0200
+++ ns_server/src/path_config.erl   2015-10-08 10:38:48.687500000 +0200
@@ -53,7 +53,7 @@
     filename:join(component_path(NameAtom), SubPath).
 
 tempfile(Dir, Prefix, Suffix) ->
-    {_, _, MicroSecs} = erlang:now(),
+    {_, _, MicroSecs} = erlang:timestamp(),
     Pid = os:getpid(),
     Filename = Prefix ++ integer_to_list(MicroSecs) ++ "_" ++
                Pid ++ Suffix,
diff -r -u ns_server.orig/src/recoverer.erl ns_server/src/recoverer.erl
--- ns_server.orig/src/recoverer.erl    2015-10-08 11:31:21.415655000 +0200
+++ ns_server/src/recoverer.erl 2015-10-08 10:36:46.182185000 +0200
@@ -23,16 +23,16 @@
          is_recovery_complete/1]).
 
 -record(state, {bucket_config :: list(),
-                recovery_map :: dict(),
-                post_recovery_chains :: dict(),
-                apply_map :: array(),
-                effective_map :: array()}).
+                recovery_map :: dict:dict(),
+                post_recovery_chains :: dict:dict(),
+                apply_map :: array:array(),
+                effective_map :: array:array()}).
 
 -spec start_recovery(BucketConfig) ->
                             {ok, RecoveryMap, {Servers, BucketConfig}, #state{}}
                                 | not_needed
   when BucketConfig :: list(),
-       RecoveryMap :: dict(),
+       RecoveryMap :: dict:dict(),
        Servers :: [node()].
 start_recovery(BucketConfig) ->
     NumVBuckets = proplists:get_value(num_vbuckets, BucketConfig),
@@ -92,7 +92,7 @@
                     effective_map=array:from_list(OldMap)}}
     end.
 
--spec get_recovery_map(#state{}) -> dict().
+-spec get_recovery_map(#state{}) -> dict:dict().
 get_recovery_map(#state{recovery_map=RecoveryMap}) ->
     RecoveryMap.
 
@@ -205,7 +205,7 @@
 -define(MAX_NUM_SERVERS, 50).
 
 compute_recovery_map_test_() ->
-    random:seed(now()),
+    random:seed(erlang:timestamp()),
 
     {timeout, 100,
      {inparallel,
diff -r -u ns_server.orig/src/remote_clusters_info.erl ns_server/src/remote_clusters_info.erl
--- ns_server.orig/src/remote_clusters_info.erl 2015-10-08 11:31:21.416143000 +0200
+++ ns_server/src/remote_clusters_info.erl  2015-10-08 10:37:57.095653000 +0200
@@ -121,10 +121,10 @@
           {node, node(), remote_clusters_info_config_update_interval}, 10000)).
 
 -record(state, {cache_path :: string(),
-                scheduled_config_updates :: set(),
-                remote_bucket_requests :: dict(),
-                remote_bucket_waiters :: dict(),
-                remote_bucket_waiters_trefs :: dict()}).
+                scheduled_config_updates :: set:set(),
+                remote_bucket_requests :: dict:dict(),
+                remote_bucket_waiters :: dict:dict(),
+                remote_bucket_waiters_trefs :: dict:dict()}).
 
 start_link() ->
     gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
diff -r -u ns_server.orig/src/ringbuffer.erl ns_server/src/ringbuffer.erl
--- ns_server.orig/src/ringbuffer.erl   2015-10-08 11:31:21.416440000 +0200
+++ ns_server/src/ringbuffer.erl    2015-10-08 10:33:36.063532000 +0200
@@ -18,7 +18,7 @@
 -export([new/1, to_list/1, to_list/2, to_list/3, add/2]).
 
 % Create a ringbuffer that can hold at most Size items.
--spec new(integer()) -> queue().
+-spec new(integer()) -> queue:queue().
 new(Size) ->
     queue:from_list([empty || _ <- lists:seq(1, Size)]).
 
@@ -26,15 +26,15 @@
 % Convert the ringbuffer to a list (oldest items first).
 -spec to_list(integer()) -> list().
 to_list(R) -> to_list(R, false).
--spec to_list(queue(), W) -> list() when is_subtype(W, boolean());
-             (integer(), queue()) -> list().
+-spec to_list(queue:queue(), W) -> list() when is_subtype(W, boolean());
+             (integer(), queue:queue()) -> list().
 to_list(R, WithEmpties) when is_boolean(WithEmpties) ->
     queue:to_list(to_queue(R));
 
 % Get at most the N newest items from the given ringbuffer (oldest first).
 to_list(N, R) -> to_list(N, R, false).
 
--spec to_list(integer(), queue(), boolean()) -> list().
+-spec to_list(integer(), queue:queue(), boolean()) -> list().
 to_list(N, R, WithEmpties) ->
     L =  lists:reverse(queue:to_list(to_queue(R, WithEmpties))),
     lists:reverse(case (catch lists:split(N, L)) of
@@ -43,14 +43,14 @@
                   end).
 
 % Add an element to a ring buffer.
--spec add(term(), queue()) -> queue().
+-spec add(term(), queue:queue()) -> queue:queue().
 add(E, R) ->
     queue:in(E, queue:drop(R)).
 
 % private
--spec to_queue(queue()) -> queue().
+-spec to_queue(queue:queue()) -> queue:queue().
 to_queue(R) -> to_queue(R, false).
 
--spec to_queue(queue(), boolean()) -> queue().
+-spec to_queue(queue:queue(), boolean()) -> queue:queue().
 to_queue(R, false) -> queue:filter(fun(X) -> X =/= empty end, R);
 to_queue(R, true) -> R.
diff -r -u ns_server.orig/src/vbucket_map_mirror.erl ns_server/src/vbucket_map_mirror.erl
--- ns_server.orig/src/vbucket_map_mirror.erl   2015-10-08 11:31:21.417885000 +0200
+++ ns_server/src/vbucket_map_mirror.erl    2015-10-07 22:27:21.036638000 +0200
@@ -119,7 +119,7 @@
       end).
 
 -spec node_vbuckets_dict_or_not_present(bucket_name()) ->
-                                               dict() | no_map | not_present.
+                                               dict:dict() | no_map | not_present.
 node_vbuckets_dict_or_not_present(BucketName) ->
     case ets:lookup(vbucket_map_mirror, BucketName) of
         [] ->
diff -r -u ns_server.orig/src/vbucket_move_scheduler.erl ns_server/src/vbucket_move_scheduler.erl
--- ns_server.orig/src/vbucket_move_scheduler.erl   2015-10-08 11:31:21.418054000 +0200
+++ ns_server/src/vbucket_move_scheduler.erl    2015-10-07 22:26:10.523913000 +0200
@@ -128,7 +128,7 @@
           backfills_limit :: non_neg_integer(),
           moves_before_compaction :: non_neg_integer(),
           total_in_flight = 0 :: non_neg_integer(),
-          moves_left_count_per_node :: dict(), % node() -> non_neg_integer()
+          moves_left_count_per_node :: dict:dict(), % node() -> non_neg_integer()
           moves_left :: [move()],
 
           %% pending moves when current master is undefined For them
@@ -136,13 +136,13 @@
           %% And that's first moves that we ever consider doing
           moves_from_undefineds :: [move()],
 
-          compaction_countdown_per_node :: dict(), % node() -> non_neg_integer()
-          in_flight_backfills_per_node :: dict(),  % node() -> non_neg_integer() (I.e. counts current moves)
-          in_flight_per_node :: dict(),            % node() -> non_neg_integer() (I.e. counts current moves)
-          in_flight_compactions :: set(),          % set of nodes
+          compaction_countdown_per_node :: dict:dict(), % node() -> non_neg_integer()
+          in_flight_backfills_per_node :: dict:dict(),  % node() -> non_neg_integer() (I.e. counts current moves)
+          in_flight_per_node :: dict:dict(),            % node() -> non_neg_integer() (I.e. counts current moves)
+          in_flight_compactions :: set:set(),          % set of nodes
 
-          initial_move_counts :: dict(),
-          left_move_counts :: dict(),
+          initial_move_counts :: dict:dict(),
+          left_move_counts :: dict:dict(),
           inflight_moves_limit :: non_neg_integer()
          }).
 
diff -r -u ns_server.orig/src/xdc_vbucket_rep_xmem.erl ns_server/src/xdc_vbucket_rep_xmem.erl
--- ns_server.orig/src/xdc_vbucket_rep_xmem.erl 2015-10-08 11:31:21.419959000 +0200
+++ ns_server/src/xdc_vbucket_rep_xmem.erl  2015-10-07 22:24:35.829228000 +0200
@@ -134,7 +134,7 @@
     end.
 
 %% internal
--spec categorise_statuses_to_dict(list(), list()) -> {dict(), dict()}.
+-spec categorise_statuses_to_dict(list(), list()) -> {dict:dict(), dict:dict()}.
 categorise_statuses_to_dict(Statuses, MutationsList) ->
     {ErrorDict, ErrorKeys, _}
         = lists:foldl(fun(Status, {DictAcc, ErrorKeyAcc, CountAcc}) ->
@@ -164,7 +164,7 @@
                       lists:reverse(Statuses)),
     {ErrorDict, ErrorKeys}.
 
--spec lookup_error_dict(term(), dict()) -> integer().
+-spec lookup_error_dict(term(), dict:dict()) -> integer().
 lookup_error_dict(Key, ErrorDict)->
      case dict:find(Key, ErrorDict) of
          error ->
@@ -173,7 +173,7 @@
              V
      end.
 
--spec convert_error_dict_to_string(dict()) -> list().
+-spec convert_error_dict_to_string(dict:dict()) -> list().
 convert_error_dict_to_string(ErrorKeyDict) ->
     StrHead = case dict:size(ErrorKeyDict) > 0 of
                   true ->

Next!

# gmake
[no errors]

There are no more errors. Let's look at the destination folder:

# ll install/
total 32
drwxr-xr-x  3 root  wheel  1536 Oct  8 11:21 bin/
drwxr-xr-x  2 root  wheel   512 Oct  8 11:21 doc/
drwxr-xr-x  5 root  wheel   512 Oct  8 11:21 etc/
drwxr-xr-x  6 root  wheel  1024 Oct  8 11:21 lib/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 man/
drwxr-xr-x  2 root  wheel   512 Oct  8 11:21 samples/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 share/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 var/

Success. Couchbase is built.

Running the server

# bin/couchbase-server
bin/couchbase-server: Command not found.

What the hell is that file?

root@couchbasebsd:~/couchbase # head bin/couchbase-server
#! /bin/bash
#
# Copyright (c) 2010-2011, Couchbase, Inc.
# All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0

Yum. Sweet bash. Let's dirty our system a bit:

# ln -s /usr/local/bin/bash /bin/bash

Let's try running the server again:

# bin/couchbase-server
Erlang/OTP 18 [erts-7.0.1] [source] [64-bit] [smp:2:2] [async-threads:16] [hipe] [kernel-poll:false]

Eshell V7.0.1  (abort with ^G)

Nothing complained. Let's see if the web UI is present.

# sockstat -4l | grep 8091
#

It's not.

What's in the log?

[user:critical,2015-10-08T11:42:24.599,ns_1@127.0.0.1:ns_server_sup<0.271.0>:menelaus_sup:start_link:51]Couchbase Server has failed to start on web port 8091 on node 'ns_1@127.0.0.1'. Perhaps another process has taken port 8091 already? If so, please stop that process first before trying again.
[ns_server:info,2015-10-08T11:42:24.600,ns_1@127.0.0.1:mb_master<0.319.0>:mb_master:terminate:299]Synchronously shutting down child mb_master_sup

The server could not start. Let's see more of the logs:

# /root/couchbase/install/bin/cbbrowse_logs
[...]
[error_logger:error,2015-10-08T11:57:36.860,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
     Supervisor: {local,ns_ssl_services_sup}
     Context:    start_error
     Reason:     {bad_generate_cert_exit,1,<<>>}
     Offender:   [{pid,undefined},
                  {id,ns_ssl_services_setup},
                  {mfargs,{ns_ssl_services_setup,start_link,[]}},
                  {restart_type,permanent},
                  {shutdown,1000},
                  {child_type,worker}]

bad_generate_cert_exit? Let's execute that program ourselves:

# bin/generate_cert
ELF binary type "0" not known.
bin/generate_cert: Exec format error. Binary file not executable.

# file bin/generate_cert
bin/generate_cert: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, BuildID[md5/uuid]=48f74c5e6c624dfe8ecba6d8687f151b, not stripped

Nothing beats building some software on FreeBSD and ending up with Linux binaries.

Where is the source of that program?

# find . -name '*generate_cert*'
./ns_server/deps/generate_cert
./ns_server/deps/generate_cert/generate_cert.go
./ns_server/priv/i386-darwin-generate_cert
./ns_server/priv/i386-linux-generate_cert
./ns_server/priv/i386-win32-generate_cert.exe
./install/bin/generate_cert

Let's build it and replace the Linux one.

# cd ns_server/deps/generate_cert/
# go build
# file generate_cert
generate_cert: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, not stripped
# cp generate_cert ../../../install/bin/

# cd ../gozip/
# go build
# cp gozip ../../../install/bin

Let's run the server again.

# bin/couchbase-server
Erlang/OTP 18 [erts-7.0.1] [source] [64-bit] [smp:2:2] [async-threads:16] [hipe] [kernel-poll:false]

Eshell V7.0.1  (abort with ^G)

# sockstat -4l | grep 8091
root     beam.smp   93667 39 tcp4   *:8091                *:*

This time the web UI is present.

The first page you see when you have just installed a Couchbase server

Let's follow the setup and create a bucket.

Couchbase cluster overview

So far, everything seems to be working.

Interacting with the server via the CLI is also working.

# bin/couchbase-cli bucket-list -u Administrator -p abcd1234 -c 127.0.0.1
default
 bucketType: membase
 authType: sasl
 saslPassword:
 numReplicas: 1
 ramQuota: 507510784
 ramUsed: 31991104
test_bucket
 bucketType: membase
 authType: sasl
 saslPassword:
 numReplicas: 1
 ramQuota: 104857600
 ramUsed: 31991008

Conclusion

I obviously haven't tested every feature of the server, but as far as this experiment goes, it's perfectly capable of running on FreeBSD.

GLIB header files do not match library version

Not so frequently asked questions and stuff: 

The FreeBSD logo

Let's try to build graphics/gdk-pixbuf2 on FreeBSD:

# make -C /usr/ports/graphics/gdk-pixbuf2 install clean
[...]
checking for GLIB - version >= 2.37.6... *** GLIB header files (version 2.36.3) do not match
*** library (version 2.44.1)
no
configure: error:
*** GLIB 2.37.6 or better is required. The latest version of
*** GLIB is always available from ftp://ftp.gtk.org/pub/gtk/.
===>  Script "configure" failed unexpectedly.
Please report the problem to gnome@FreeBSD.org [maintainer] and attach the
"/usr/ports/graphics/gdk-pixbuf2/work/gdk-pixbuf-2.32.1/config.log" including
the output of the failure of your make command. Also, it might be a good idea
to provide an overview of all packages installed on your system (e.g. a
/usr/local/sbin/pkg-static info -g -Ea).
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/graphics/gdk-pixbuf2
*** Error code 1

Stop.
make: stopped in /usr/ports/graphics/gdk-pixbuf2
# pkg info | grep glib
glib-2.44.1_1                  Some useful routines of C programming (current stable version)

Huh? Okay. I was pretty sure 2.44 > 2.37.

Who does that check?

# grep -R 'GLIB header files' *
work/gdk-pixbuf-2.32.1/aclocal.m4:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/configure:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/config.log:|       printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/configure.libtool.bak:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
# cat work/gdk-pixbuf-2.32.1/aclocal.m4
[...]
  else if ((glib_major_version != GLIB_MAJOR_VERSION) ||
           (glib_minor_version != GLIB_MINOR_VERSION) ||
           (glib_micro_version != GLIB_MICRO_VERSION))
    {
      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
             GLIB_MAJOR_VERSION, GLIB_MINOR_VERSION, GLIB_MICRO_VERSION);
      printf("*** library (version %d.%d.%d)\n",
             glib_major_version, glib_minor_version, glib_micro_version);
    }
[...]
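Out of curiosity, the check can be reproduced outside of configure with a tiny standalone program (my own sketch, not part of the port; call it glibcheck.c and build it with cc glibcheck.c `pkg-config --cflags --libs glib-2.0`):

/* glibcheck.c: compare the GLIB version seen at compile time (headers)
 * with the one seen at run time (library), exactly like the configure
 * test does. */
#include <glib.h>
#include <stdio.h>

int main(void) {
    printf("headers: %d.%d.%d\n", GLIB_MAJOR_VERSION, GLIB_MINOR_VERSION,
           GLIB_MICRO_VERSION);
    printf("library: %u.%u.%u\n", glib_major_version, glib_minor_version,
           glib_micro_version);
    return 0;
}

On the broken system above, it would report 2.36.3 headers against the 2.44.1 library.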

Where are those constants defined?

# grep -R -A 2 GLIB_MAJOR_VERSION /usr/local/include/*
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MAJOR_VERSION 2
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MINOR_VERSION 36
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MICRO_VERSION 3

Where does this file come from?

# pkg which /usr/local/include/glib-2.0/glibconfig.h
/usr/local/include/glib-2.0/glibconfig.h was not found in the database

Nowhere.

Did the port even install it?

# grep glibconfig.h /usr/ports/devel/glib20/pkg-plist
lib/glib-2.0/include/glibconfig.h

No: it installs it as lib/glib-2.0/include/glibconfig.h, not include/glib-2.0/glibconfig.h where our stale copy lives.

Let's delete it and rebuild the port just in case.

# rm /usr/local/include/glib-2.0/glibconfig.h
# make -C /usr/ports/devel/glib20 reinstall clean

Let's try our build again.

# make -C /usr/ports/graphics/gdk-pixbuf2 install clean
[...]
checking for GLIB - version >= 2.37.6... yes (version 2.44.1)
[...]
===>  Checking if gdk-pixbuf2 already installed
===>   Registering installation for gdk-pixbuf2-2.32.1
Installing gdk-pixbuf2-2.32.1...
===>  Cleaning for gdk-pixbuf2-2.32.1

Success!

FreeBSD, MySQL and the story of the unlinked named pipe

Not so frequently asked questions and stuff: 

The FreeBSD logo and MySQL's logo

Introduction

One of my applications was using named pipes to feed data to a MySQL server while it was still being downloaded from an FTP server. One of those processes crashed, and MySQL was left waiting for data from a FIFO file that had been deleted.

How to reproduce the situation

Create a table:

mysql> CREATE TABLE t (c char(20) DEFAULT NULL);

Create a named pipe on the system:

# mkfifo /tmp/ghost.fifo
# chown mysql /tmp/ghost.fifo

Load data from the named pipe:

mysql> LOAD DATA INFILE '/tmp/ghost.fifo' INTO TABLE t;

The request is now waiting for a writer to write something into the pipe.

Remove the file:

# rm /tmp/ghost.fifo

Now no-one can write anything into the pipe, since it isn't linked anywhere on the filesystem.

Investigating the problem

Determining what MySQL is doing

Now let's say you don't know what causes the problem. Your MySQL server is not behaving correctly. Requests using table t are waiting indefinitely to lock the table and nothing happens.

Let's check the process list:

mysql> show full processlist\G
*************************** 1. row ***************************
     Id: 1
   User: root
   Host: localhost
     db: fifo
Command: Query
   Time: 13
  State: System lock
   Info: LOAD DATA INFILE '/tmp/ghost.fifo' INTO TABLE t
1 rows in set (0.00 sec)

Our request is indeed waiting for the system to unlock its thread.

Let's get more information in the InnoDB status report:

mysql> show engine innodb status\G
[...]
------------
TRANSACTIONS
------------
Trx id counter 2319
Purge done for trx's n:o < 2313 undo n:o < 0 state: running but idle
History list length 3
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
mysql tables in use 1, locked 1
MySQL thread id 1, OS thread handle 0x829e76000, query id 9 localhost root System lock
LOAD DATA INFILE '/tmp/ghost.fifo' INTO TABLE t
[...]

That gives us the thread handle of the request.

Let's create a dump of the server's memory, and see what it's doing:

# gcore -c mysqld.core `pgrep mysqld`

Now, let's find out what that "OS thread handle" is.
That line is printed by the function thd_security_context in sql/sql_class.cc:

extern "C"
char *thd_security_context(THD *thd, char *buffer, unsigned int length,
                           unsigned int max_query_len) {
[...]

  len= my_snprintf(header, sizeof(header),
                   "MySQL thread id %lu, OS thread handle 0x%lx, query id %lu",
                   thd->thread_id, (ulong) thd->real_id, (ulong) thd->query_id);
  str.length(0);
  str.append(header, len);

[...]
}

Class THD's definition in sql/sql_class.h tells us that this attribute is a pthread_t, which on FreeBSD is a pointer to a struct pthread.


class THD :public MDL_context_owner,
           public Statement,
           public Open_tables_state
{
[...]
public:
    pthread_t real_id; /* For debugging */
[...]
}

That struct is typedefed in FreeBSD's sys/_pthreadtypes.h and defined in lib/libthr/thread/thr_private.h:

typedef struct  pthread                 *pthread_t;
/*
* Thread structure.
*/
struct pthread {
#define _pthread_startzero      tid
    /* Kernel thread id. */
    long                    tid;
#define TID_TERMINATED          1
[...]
}

The interesting information here is that the system thread id is the first element of the struct. Let's fetch that from the dump using the thread handle's memory address:

If you don't have gdb installed:

make -C /usr/ports/devel/gdb install clean

Then:

# gdb /usr/local/libexec/mysqld mysqld.core
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols found)...
Core was generated by `mysqld'.
[...]
#0  0x00000008020b689c in __error () from /lib/libthr.so.3
[New Thread 829e76c00 (LWP 100474/mysqld)]
[...]
[New Thread 802c06400 (LWP 100411/mysqld)]
(gdb) x 0x829e76000
0x829e76000:    0x0001886c

0x0001886c = 100460
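In C terms, what gdb just did is read the first machine word of the structure the handle points to. Here's a standalone sketch of the same trick, leaning on the libthr layout quoted above (FreeBSD-specific; build with cc -o tid tid.c -lpthread):

#include <pthread.h>
#include <stdio.h>

/* Mirror of the start of struct pthread (lib/libthr/thread/thr_private.h):
 * the kernel thread id is the very first member, so a raw handle can be
 * cast to this prefix in order to read it. */
struct pthread_prefix {
    long tid;
};

int main(void) {
    /* pthread_self() returns the same kind of handle that MySQL prints as
     * "OS thread handle". Its first word is the LWP id that procstat(1)
     * and ps(1) report. */
    pthread_t self = pthread_self();
    printf("LWP id of this thread: %ld\n", ((struct pthread_prefix *)self)->tid);
    return 0;
}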

Let's see if we have a thread with that id in the process:

# procstat -t `pgrep mysqld` | grep 100460
 1158 100460 mysqld           -                  0  120 sleep   fifoor

# ps -U mysql -H -O wchan | grep fifoor
1158 fifoor   -  I    0:00.03 /usr/local/libexec/mysqld --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local --datadir=/var/db/mysql --plu

We do. Let's see in the dump what it's doing:

(gdb) info threads
  23 Thread 802c06400 (LWP 100411/mysqld)  0x000000080239453a in poll () from /lib/libc.so.7
  22 Thread 802c06800 (LWP 100413/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  21 Thread 802c07800 (LWP 100414/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  20 Thread 802c07c00 (LWP 100415/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  19 Thread 802c08000 (LWP 100416/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  18 Thread 802c08400 (LWP 100417/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  17 Thread 802c08800 (LWP 100418/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  16 Thread 802c08c00 (LWP 100419/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  15 Thread 802c09000 (LWP 100420/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  14 Thread 802c09400 (LWP 100421/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  13 Thread 802c09800 (LWP 100422/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  12 Thread 802c0a800 (LWP 100425/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  11 Thread 802c0ac00 (LWP 100426/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  10 Thread 802c0b000 (LWP 100427/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  9 Thread 802c0b400 (LWP 100428/mysqld)  0x00000008023fc06a in select () from /lib/libc.so.7
  8 Thread 802c0b800 (LWP 100429/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  7 Thread 802c0bc00 (LWP 100430/mysqld)  0x00000008023fc06a in select () from /lib/libc.so.7
  6 Thread 802c0c000 (LWP 100431/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  5 Thread 802c0c400 (LWP 100432/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  4 Thread 802c0c800 (LWP 100433/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3
  3 Thread 802c0d400 (LWP 100434/mysqld)  0x000000080233162a in _sigwait () from /lib/libc.so.7
  2 Thread 829e76000 (LWP 100460/mysqld)  0x00000008023f492a in open () from /lib/libc.so.7
* 1 Thread 829e76800 (LWP 100461/mysqld)  0x00000008020b689c in __error () from /lib/libthr.so.3

(gdb) thread 2
[Switching to thread 2 (Thread 829e76000 (LWP 100460/mysqld))]#0  0x00000008023f492a in open () from /lib/libc.so.7
(gdb) bt
#0  0x00000008023f492a in open () from /lib/libc.so.7
#1  0x00000008020ad715 in open () from /lib/libthr.so.3
#2  0x000000000088ee7d in my_open ()
#3  0x00000000007f35aa in mysql_load ()
#4  0x00000000006c7d04 in mysql_execute_command ()
#5  0x00000000006c5c55 in mysql_parse ()
#6  0x00000000006c4303 in dispatch_command ()
#7  0x00000000006c57dd in do_command ()
#8  0x00000000006a2aa6 in do_handle_one_connection ()
#9  0x00000000006a2909 in handle_one_connection ()
#10 0x0000000000a67c01 in pfs_spawn_thread ()
#11 0x00000008020ab4a4 in pthread_create () from /lib/libthr.so.3
#12 0x0000000000000000 in ?? ()
(gdb) frame 2
#2  0x000000000088ee7d in my_open ()

Let's see what my_open is all about:

mysys/my_open.c:

File my_open(const char *FileName, int Flags, myf MyFlags) {
[...]

It's a portability wrapper around the various systems' open functions.

Let's see the value of the first argument. On amd64, the first integer argument is passed in register rdi:

(gdb) info registers
rax            0x5      5
rbx            0x7ffffd1a3c80   140737439743104
rcx            0xa6a650 10921552
rdx            0x0      0
rsi            0x0      0
rdi            0x7ffffd1a3c80   140737439743104
rbp            0x7ffffd1a3710   0x7ffffd1a3710
rsp            0x7ffffd1a3700   0x7ffffd1a3700
r8             0x7ffffd1a3c30   140737439743024
r9             0x0      0
r10            0x0      0
r11            0x828140e68      35032141416
r12            0x828b0e000      35042418688
r13            0x1      1
r14            0x10     16
r15            0x14     20
rip            0x88ee7d 0x88ee7d <my_open+29>
eflags         0x246    582
cs             0x43     67
ss             0x3b     59
ds             0x0      0
es             0x0      0
fs             0x0      0
gs             0x0      0
(gdb) x/s 0x7ffffd1a3c80
0x7ffffd1a3c80:  "/tmp/ghost.fifo"

Yep, that's the one.

All this means that the process tried to open the named pipe, and that the thread is waiting for something in the kernel. This is confirmed by procstat, which told us earlier that the thread was sleeping on a wait channel called "fifoor".

Since the code is stuck in open, the file descriptor that would be returned has not been created yet. That's why no FIFO shows up when you use procstat to list the process' open files.

Determining what the kernel is doing

Let's fire up a kernel debugger and find our thread:

# kgdb
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...
[...]
#0  sched_switch (td=0xffffffff81641ff0, newtd=<value optimized out> flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1945
1945                    cpuid = PCPU_GET(cpuid);
(kgdb) info threads
  287 Thread 100444 (PID=10358: kgdb)  sched_switch (td=0xfffff8004ca59920, newtd=<value optimized out>, flags=<value optimized out>)
    at /usr/src/sys/kern/sched_ule.c:1945
[...]
  273 Thread 100460 (PID=1158: mysqld)  sched_switch (td=0xfffff8003b8a6920, newtd=<value optimized out>, flags=<value optimized out>)
    at /usr/src/sys/kern/sched_ule.c:1945
[...]
Current language:  auto; currently minimal

(kgdb) thread 273
[Switching to thread 273 (Thread 100460)]#0  sched_switch (td=0xfffff8003b8a6920, newtd=<value optimized out>, flags=<value optimized out>)
    at /usr/src/sys/kern/sched_ule.c:1945
1945                    cpuid = PCPU_GET(cpuid);

Let's see what's happening:

(kgdb) bt
#0  sched_switch (td=0xfffff8003b8a6920, newtd=<value optimized out>, flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1945
#1  0xffffffff80931621 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:494
#2  0xffffffff8096e7eb in sleepq_catch_signals (wchan=0xfffff800376922c8, pri=104) at /usr/src/sys/kern/subr_sleepqueue.c:426
#3  0xffffffff8096e69f in sleepq_wait_sig (wchan=0x0, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:631
#4  0xffffffff8093103d in _sleep (ident=<value optimized out>, lock=<value optimized out>, priority=<value optimized out>,
    wmesg=<value optimized out>, sbt=<value optimized out>, pr=<value optimized out>, flags=<value optimized out>)
    at /usr/src/sys/kern/kern_synch.c:254
#5  0xffffffff808190aa in fifo_open (ap=0xfffffe0095f17678) at /usr/src/sys/fs/fifofs/fifo_vnops.c:191
#6  0xffffffff80e41fc1 in VOP_OPEN_APV (vop=<value optimized out>, a=<value optimized out>) at vnode_if.c:469
#7  0xffffffff809d6b54 in vn_open_vnode (vp=0xfffff8002e88e1d8, fmode=1, cred=0xfffff8004c80a000, td=0xfffff8003b8a6920, fp=0xfffff8003b63a3c0)
    at vnode_if.h:196
#8  0xffffffff809d674c in vn_open_cred (ndp=0xfffffe0095f17880, flagp=0xfffffe0095f1795c, cmode=<value optimized out>,
    vn_open_flags=<value optimized out>, cred=0x0, fp=0xfffff8003b63a3c0) at /usr/src/sys/kern/vfs_vnops.c:256
#9  0xffffffff809cfedf in kern_openat (td=0xfffff8003b8a6920, fd=-100, path=0x7ffffd1a3c80 <Error reading address 0x7ffffd1a3c80: Bad address>,
    pathseg=UIO_USERSPACE, flags=1, mode=<value optimized out>) at /usr/src/sys/kern/vfs_syscalls.c:1096
#10 0xffffffff80d25841 in amd64_syscall (td=0xfffff8003b8a6920, traced=0) at subr_syscall.c:134
#11 0xffffffff80d0aa5b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:391
#12 0x00000008023f492a in ?? ()
Previous frame inner to this frame (corrupt stack?)

(kgdb) frame 5
#5  0xffffffff808190aa in fifo_open (ap=0xfffffe0095f17678) at /usr/src/sys/fs/fifofs/fifo_vnops.c:191
191                             error = msleep(&fip->fi_readers, PIPE_MTX(fpipe),

Reading the code of fs/fifofs/fifo_vnops.c tells us that unless another thread opens the pipe with the intention of writing into it, the thread will sleep eternally.

struct fifoinfo {
        struct pipe *fi_pipe;
        long    fi_readers;
        long    fi_writers;
};

[...]

static int
fifo_open(ap)
        struct vop_open_args /* {
                struct vnode *a_vp;
                int  a_mode;
                struct ucred *a_cred;
                struct thread *a_td;
                struct file *a_fp;
        } */ *ap;
{
[...]
                        error = msleep(&fip->fi_readers, PIPE_MTX(fpipe),
                            PDROP | PCATCH | PSOCK, "fifoor", 0);
[...]
}

This is bad. Since the named pipe file was unlinked, there's no way anybody will open it and write into it.

Also, since the thread is waiting, the MySQL server cannot even be restarted without being SIGKILL'd first. That's really not good.
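As an aside, the blocking itself is plain POSIX FIFO semantics, and there is an escape hatch that MySQL's LOAD DATA path clearly doesn't use: O_NONBLOCK. A minimal sketch showing both behaviours (the path /tmp/demo.fifo is made up):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkfifo("/tmp/demo.fifo", 0600); /* ignoring EEXIST for brevity */

    /* With O_NONBLOCK, a read-side open of a writerless FIFO returns
     * immediately instead of sleeping on "fifoor" in the kernel. */
    int fd = open("/tmp/demo.fifo", O_RDONLY | O_NONBLOCK);
    printf("non-blocking open returned fd %d\n", fd);
    if (fd >= 0)
        close(fd);

    /* A plain O_RDONLY open would hang right here until another process
     * opened the FIFO for writing -- exactly mysqld's situation. */
    unlink("/tmp/demo.fifo");
    return 0;
}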

"Solving" the problem

Re-linking the file

Let's try extracting the vnode from memory, and linking it to a new file, in the kernel. Then, it would be as if it had never been deleted.

I couldn't find any method to achieve this, so I wrote a kernel module:

mysql_fifo_test.c:

#include <sys/types.h>
#include <sys/module.h>
#include <sys/errno.h>
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/fcntl.h>
#include <sys/vnode.h>
#include <sys/buf.h>
#include <sys/capability.h>

int link_vnode_at(struct thread *td, struct vnode *vp, char *path);
void mysql_fifo_test_main_thread(void *p);

static struct mtx main_thread_mutex;
static struct thread *main_thread;

/* Most of this code was taken from kern_linkat (kern/vfs_syscalls.c)*/
int link_vnode_at(struct thread *td, struct vnode *vp, char *path) {
    struct mount *mp;
    struct nameidata nd;
    cap_rights_t rights;
    int error;

    bwillwrite();

again:
    if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0) {
            vrele(vp);
            return (error);
    }
    NDINIT_ATRIGHTS(&nd, CREATE, LOCKPARENT | SAVENAME | AUDITVNODE2,
        UIO_SYSSPACE, path, AT_FDCWD, cap_rights_init(&rights, CAP_LINKAT), main_thread);
    if ((error = namei(&nd)) == 0) {
            if (nd.ni_vp != NULL) {
                    if (nd.ni_dvp == nd.ni_vp)
                            vrele(nd.ni_dvp);
                    else
                            vput(nd.ni_dvp);
                    vrele(nd.ni_vp);
                    error = EEXIST;
            } else if ((error = vn_lock(vp, LK_EXCLUSIVE)) == 0) {
                    /*
                     * Check for cross-device links.  No need to
                     * recheck vp->v_type, since it cannot change
                     * for non-doomed vnode.
                     */
                    if (nd.ni_dvp->v_mount != vp->v_mount)
                            error = EXDEV;
                    if (error == 0)
                            error = VOP_LINK(nd.ni_dvp, vp, &nd.ni_cnd);
                    VOP_UNLOCK(vp, 0);
                    vput(nd.ni_dvp);
            } else {
                    vput(nd.ni_dvp);
                    NDFREE(&nd, NDF_ONLY_PNBUF);
                    vrele(vp);
                    vn_finished_write(mp);
                    goto again;
            }
            NDFREE(&nd, NDF_ONLY_PNBUF);
    }
    vrele(vp);
    vn_finished_write(mp);
    return (error);
}

struct fifoinfo {
        struct pipe *fi_pipe;
        long    fi_readers;
        long    fi_writers;
};

void mysql_fifo_test_main_thread(void *p) {
  mtx_lock(&main_thread_mutex);

  struct vop_open_args *ap;
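  /* Address of the blocked thread's vop_open_args, copied by hand from
   * the kgdb backtrace. It changes on every run, so it has to be updated
   * before building the module. */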
  ap = (struct vop_open_args *) 0xfffffe0095e9a678;

  struct vnode *vp = ap->a_vp;
  vref(vp);

  int error;
  error = link_vnode_at(main_thread, vp, "/tmp/resurected.fifo");
  printf("MySQL FIFO Test link_vnode_at returned %d.\n", error);

  mtx_unlock(&main_thread_mutex);
  kthread_exit();
}

static void
mysql_fifo_test_load() {
  mtx_init(&main_thread_mutex, "mysql_fifo_test_main_thread_mutex", NULL, MTX_DEF);
  mtx_lock(&main_thread_mutex);

  kthread_add(mysql_fifo_test_main_thread, NULL, NULL, &main_thread, 0, 0, "mysql_fifo_test_main_thread");

  mtx_unlock(&main_thread_mutex);
}

static void
mysql_fifo_test_unload() {
  mtx_lock(&main_thread_mutex);
  mtx_destroy(&main_thread_mutex);  
}

static int
mysql_fifo_test_loader(struct module *m, int what, void *arg)
{
  int err = 0;

  switch (what) {
  case MOD_LOAD:
    mysql_fifo_test_load();
    uprintf("MySQL FIFO Test loaded.\n");
    break;
  case MOD_UNLOAD:
    mysql_fifo_test_unload();
    uprintf("MySQL FIFO Test unloaded.\n");
    break;
  default:
    err = EOPNOTSUPP;
    break;
  }
  return(err);
}

static moduledata_t mysql_fifo_test_mod = {
  "mysql_fifo_test",
  mysql_fifo_test_loader,
  NULL
};

DECLARE_MODULE(mysql_fifo_test, mysql_fifo_test_mod, SI_SUB_KLD, SI_ORDER_ANY);

Makefile:

KMOD    =  mysql_fifo_test
SRCS    =  mysql_fifo_test.c

# https://forums.freebsd.org/threads/vnode_if-h-missing-from-freebsd-9-0-beta1-0-thu-jul-28-16-34-16-utc-2011.27891/#post-230606
SRCS    += vnode_if.h

.include <bsd.kmod.mk>

All this module does is create a new kernel thread that links the vnode (addressed directly via its hardcoded memory address) to a new file.

Let's see if that works:

# kldload -v ./mysql_fifo_test.ko
MySQL FIFO Test KLD loaded.
Loaded ./mysql_fifo_test.ko, id=7

# kldunload -v ./mysql_fifo_test.ko
Unloading mysql_fifo_test.ko, id=7
MySQL FIFO Test unloaded.
# tail /var/log/messages
[...]
Aug  4 06:25:06 test kernel: MySQL FIFO Test link_vnode_at returned 2.

2 is ENOENT. That means the filesystem driver refused to create a new link.

Let's read the drivers' code and see where that happens.

With ZFS: in cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:

/*
 * Link zp into dl.  Can only fail if zp has been unlinked.
 */
int
zfs_link_create(zfs_dirlock_t *dl, znode_t *zp, dmu_tx_t *tx, int flag)
{
[...]
        if (!(flag & ZRENAMING)) {
                if (zp->z_unlinked) {   /* no new links to unlinked zp */
                        ASSERT(!(flag & (ZNEW | ZEXISTS)));
                        mutex_exit(&zp->z_lock);
                        return (SET_ERROR(ENOENT));
                }
[...]
        }
[...]
}

With UFS: in ufs/ufs/ufs_vnops.c:

/*
 * link vnode call
 */
static int
ufs_link(ap)
        struct vop_link_args /* {
                struct vnode *a_tdvp;
                struct vnode *a_vp;
                struct componentname *a_cnp;
        } */ *ap;
{
[...]
        /*
         * The file may have been removed after namei droped the original
         * lock.
         */
        if (ip->i_effnlink == 0) {
                error = ENOENT;
                goto out;
        }
[...]
}

Both ZFS and UFS explicitly refuse, in their source code, to create a new link to an unlinked vnode.

That means this solution does not work.

Opening the fifo manually

If we can't link the vnode again so that someone could write into the pipe, let's be that someone.

Let's call fifo_open and fifo_close manually in the kernel module, using the arguments of the original call to fifo_open.

mysql_fifo_test.c:

#include <sys/types.h>
#include <sys/module.h>
#include <sys/errno.h>
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/fcntl.h>
#include <sys/vnode.h>
#include <sys/buf.h>
#include <sys/capability.h>

void mysql_fifo_test_main_thread(void *p);

static struct mtx main_thread_mutex;
static struct thread *main_thread;

void mysql_fifo_test_main_thread(void *p) {
  struct vop_vector *f;
  vop_open_t *fifo_open;
  vop_close_t *fifo_close;

  mtx_lock(&main_thread_mutex);
  printf("MySQL FIFO Test In da thread.\n");

  /* Get a pointer to the original open args */
  struct vop_open_args *ap;
  ap = (struct vop_open_args *) 0xfffffe0095f17678;

  /* Get pointers to the fifo ops functions */
  f = &fifo_specops;
  fifo_open = f->vop_open;
  fifo_close = f->vop_close;

  /* Call fifo_open */
  struct file some_file;
  struct vop_open_args ap_open;
  ap_open.a_vp = ap->a_vp;
  ap_open.a_mode = FWRITE;
  ap_open.a_cred = NULL; //unused in fifo_open
  ap_open.a_fp = &some_file;
  ap_open.a_td = main_thread;
  fifo_open(&ap_open);

  /* Call fifo_close */
  struct vop_close_args ap_close;
  ap_close.a_vp = ap->a_vp;
  ap_close.a_fflag = FWRITE;
  ap_close.a_cred = NULL; //unused in fifo_close
  ap_close.a_td = main_thread;
  fifo_close(&ap_close);

  mtx_unlock(&main_thread_mutex);
  kthread_exit();
}

static void
mysql_fifo_test_load() {
  mtx_init(&main_thread_mutex, "mysql_fifo_test_main_thread_mutex", NULL, MTX_DEF);
  mtx_lock(&main_thread_mutex);

  kthread_add(mysql_fifo_test_main_thread, NULL, NULL, &main_thread, 0, 0, "mysql_fifo_test_main_thread");

  mtx_unlock(&main_thread_mutex);
}

static void
mysql_fifo_test_unload() {
  mtx_lock(&main_thread_mutex);
  mtx_destroy(&main_thread_mutex);  
}

static int
mysql_fifo_test_loader(struct module *m, int what, void *arg)
{
  int err = 0;

  switch (what) {
  case MOD_LOAD:
    mysql_fifo_test_load();
    uprintf("MySQL FIFO Test loaded.\n");
    break;
  case MOD_UNLOAD:
    mysql_fifo_test_unload();
    uprintf("MySQL FIFO Test unloaded.\n");
    break;
  default:
    err = EOPNOTSUPP;
    break;
  }
  return(err);
}

static moduledata_t mysql_fifo_test_mod = {
  "mysql_fifo_test",
  mysql_fifo_test_loader,
  NULL
};

DECLARE_MODULE(mysql_fifo_test, mysql_fifo_test_mod, SI_SUB_KLD, SI_ORDER_ANY);

Let's fire up the module:

# kldload -v ./mysql_fifo_test.ko
MySQL FIFO Test loaded.
Loaded ./mysql_fifo_test.ko, id=7

# kldunload -v ./mysql_fifo_test.ko
Unloading mysql_fifo_test.ko, id=7
MySQL FIFO Test unloaded.

Nothing crashed. That's good news. Let's look at the MySQL process:

mysql> LOAD DATA INFILE '/mnt/fbd/ghost.fifo' INTO TABLE t;
Query OK, 0 rows affected (1 hour 42 min 17.07 sec)
Records: 0  Deleted: 0  Skipped: 0  Warnings: 0

mysql>

Success! The blocked thread is released, and MySQL can continue its normal life.

Conclusion

Moral of the story: don't unlink your named pipes while someone is blocked waiting for data to be written into them. If you do, the only way to release your program's threads is to close the pipe directly from inside the kernel, which is not something you want to be doing on a busy production server.

FreeBSD, POSIX.1e ACLs and inheritance

Not so frequently asked questions and stuff: 

The FreeBSD logo

Introduction

This post does not apply to you if you're using ZFS. In that case, you'll be using NFSv4 ACLs.

First, make sure your UFS volumes are mounted with option acls set.

$ cat /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/mirror/gm0s1a              /               ufs             rw,acls 1       1

I'm also setting the default umask to 027 in /etc/login.conf, so that newly created files are not readable by everyone by default.
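
For reference, that's a single umask entry in the default login class (the other entries of the class are elided below); don't forget to rebuild the login capabilities database afterwards with cap_mkdb:

default:\
	[...]
	:umask=027:

# cap_mkdb /etc/login.conf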

File ACLs

Let's create a directory for a website, and give it standard UNIX permissions (the leading 2 in the mode sets the setgid bit, so new content will inherit the wheel group).

$ mkdir /usr/local/www/truc
$ chown root:wheel /usr/local/www/truc/
$ chmod 2770 /usr/local/www/truc/

$ ll /usr/local/www/
drwxrws---  2 root  wheel  512 Jul 24 09:44 truc/

Let's create two new users, and give them full permissions in the directory:

$ pw useradd jambon
$ pw useradd poulet
$ setfacl -m user:jambon:rwx /usr/local/www/truc/
$ setfacl -m user:poulet:rwx /usr/local/www/truc/

Let's see:

$ getfacl /usr/local/www/truc/
# file: /usr/local/www/truc/
# owner: root
# group: wheel
user::rwx
user:jambon:rwx
user:poulet:rwx
group::rwx
mask::rwx
other::---

So far, so good.
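
As an aside, if you'd rather inspect ACLs from a program than through getfacl(1), FreeBSD exposes the POSIX.1e library calls directly. A minimal sketch (the file name getacl.c is mine):

getacl.c:

#include <sys/types.h>
#include <sys/acl.h>
#include <stdio.h>

int main(int argc, char **argv) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s path\n", argv[0]);
    return (1);
  }

  /* Fetch the access ACL of the given path */
  acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
  if (acl == NULL) {
    perror("acl_get_file");
    return (1);
  }

  /* Convert it to the same textual form getfacl(1) prints */
  char *text = acl_to_text(acl, NULL);
  if (text != NULL) {
    printf("%s", text);
    acl_free(text);
  }

  acl_free(acl);
  return (0);
}

No extra library is needed: on FreeBSD the ACL functions live directly in libc, so a plain cc getacl.c -o getacl is enough.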

Now, let's connect as the user and create some files.

$ sudo -s -u jambon
% umask 007
% cd /usr/local/www/truc/
% echo "Super script" > index.php

Now let's login as the second user and try to change the file:

$ sudo -s -u poulet
% cd /usr/local/www/truc/
% echo "Better script" > index.php
index.php: Permission denied.

Querying the ACLs confirms that the permissions were not inherited.

$ getfacl index.php
# file: index.php
# owner: jambon
# group: wheel
user::rw-
group::rwx              # effective: r--
mask::r--
other::---

Inheritance

Directories actually carry *two* ACLs: the access ACL we've been manipulating so far, and a default ACL, which specifies what permissions newly created content will receive. It can be queried using option -d.

$ getfacl -d /usr/local/www/truc/
# file: /usr/local/www/truc/
# owner: root
# group: wheel
user::rwx
group::rwx
mask::rwx
other::---

$ getfacl /usr/local/www/truc/
# file: /usr/local/www/truc/
# owner: root
# group: wheel
user::rwx
user:jambon:rwx
user:poulet:rwx
group::rwx
mask::rwx
other::---

This shows that both users can write in the directory, but whatever they create will only get classic UNIX permissions.

Let's change that. A default ACL must contain the three base entries (user, group and other) before named entries can be added, which is why we reset it and set those base entries first.

$ setfacl -d -b /usr/local/www/truc/
$ setfacl -d -m u::rwx,g::rwx,o:: /usr/local/www/truc/
$ setfacl -d -m user:poulet:rwx /usr/local/www/truc/
$ setfacl -d -m user:jambon:rwx /usr/local/www/truc/
$ getfacl -d /usr/local/www/truc/
# file: /usr/local/www/truc/
# owner: root
# group: wheel
user::rwx
user:jambon:rwx
user:poulet:rwx
group::rwx
mask::rwx
other::---
$ sudo -s -u jambon
% umask 007
% cd /usr/local/www/truc/
% echo "Super script" > index.php

$ sudo -s -u poulet
% cd /usr/local/www/truc/
% umask 007
% echo "Better script" > index.php

$ getfacl index.php
# file: index.php
# owner: poulet
# group: wheel
user::rw-
user:jambon:rwx         # effective: rw-
user:poulet:rwx         # effective: rw-
group::rwx              # effective: rw-
mask::rw-
other::---

So far, so good.
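
The same default ACL could also be installed programmatically, using acl_from_text(3) and acl_set_file(3). A sketch under the same assumptions as above (the file name set_default_acl.c and the hard-coded entries are mine):

set_default_acl.c:

#include <sys/types.h>
#include <sys/acl.h>
#include <stdio.h>

int main(void) {
  /* Build the ACL from its short textual form; these entries mirror
   * the setfacl -d -m invocations above. */
  acl_t acl = acl_from_text(
      "user::rwx,user:jambon:rwx,user:poulet:rwx,"
      "group::rwx,mask::rwx,other::---");
  if (acl == NULL) {
    perror("acl_from_text");
    return (1);
  }

  /* Attach it as the directory's default ACL */
  if (acl_set_file("/usr/local/www/truc", ACL_TYPE_DEFAULT, acl) != 0) {
    perror("acl_set_file");
    acl_free(acl);
    return (1);
  }

  acl_free(acl);
  return (0);
}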

Let's try that with a directory.

$ sudo -s -u jambon
% umask 007
% cd /usr/local/www/truc/
% mkdir static && cd static
% echo "I hate Javascript" > content.js

$ sudo -s -u poulet
% cd /usr/local/www/truc/static/
% echo "I also hate Javascript" > content.js

$ getfacl static/
# file: static/
# owner: jambon
# group: wheel
user::rwx
user:jambon:rwx
user:poulet:rwx
group::rwx
mask::rwx
other::---

$ getfacl static/content.js
# file: static/content.js
# owner: jambon
# group: wheel
user::rw-
user:jambon:rwx         # effective: rw-
user:poulet:rwx         # effective: rw-
group::rwx              # effective: rw-
mask::rw-
other::---

It works exactly the same.

Since we're hosting a website, let's allow user www to read (but not write) the files:

$ find /usr/local/www/truc/ -exec setfacl -m user:www:rx {} \;
$ find /usr/local/www/truc/ -type d -exec setfacl -d -m user:www:rx {} \;

Repeat the same operation with write permissions if you want the server to be able to write in some directories (and configure it to not execute code there, obviously).

The only limitation of this scheme is that the default ACL entries applied to newly created files and directories are masked by the standard user/group permissions requested at creation time. You'll therefore need to set the umask appropriately each time (manually in the shell, or in the configuration files of your FTP/Samba/whatever server).
