
Is it possible to tune a kernel parameter to allow a userland program to bind to port 80 and 443?

The reason I ask is that I think it's foolish to have a privileged process open a socket and listen. Anything that opens a socket and listens is high risk, and high-risk applications should not be running as root.

I'd much rather try to figure out what unprivileged process is listening on port 80 rather than trying to remove malware that burrowed in with root privileges.
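For concreteness, the restriction in question can be observed from any unprivileged process. This is a minimal Python probe (not from the original question) that tries to bind a TCP socket:

```python
import socket

def can_bind(port: int) -> bool:
    """Return True if this process may bind a TCP socket to `port`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except OSError:
        # EACCES for ports below net.ipv4.ip_unprivileged_port_start
        # (1024 by default); EADDRINUSE if something else holds the port.
        return False
    finally:
        s.close()

print("ephemeral port:", can_bind(0))   # port 0 (ephemeral) is always allowed
print("port 80:", can_bind(80))         # False for a normal user by default
```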

jww

7 Answers


I'm not sure what the other answers and comments here are referring to. This is possible, rather easily. There are two options, both of which allow access to low-numbered ports without having to elevate the process to root:

Option 1: Use CAP_NET_BIND_SERVICE to grant low-numbered port access to a process:

With this you can grant permanent access to a specific binary to bind to low-numbered ports via the setcap command:

sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/binary

For more details on the e/i/p part, see cap_from_text.

After doing this, /path/to/binary will be able to bind to low-numbered ports. Note that you must use setcap on the binary itself rather than a symlink.
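As a sanity check after running `setcap`, a process can inspect its own effective capability set. This illustrative Python snippet (not from the original answer) parses the `CapEff` bitmask in `/proc/self/status`; CAP_NET_BIND_SERVICE is capability number 10 in `<linux/capability.h>`:

```python
CAP_NET_BIND_SERVICE = 10  # capability number from <linux/capability.h>

def has_net_bind_service() -> bool:
    """True if the current process has CAP_NET_BIND_SERVICE in its
    effective set, judged by the CapEff hex bitmask in /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                effective = int(line.split()[1], 16)
                return bool(effective & (1 << CAP_NET_BIND_SERVICE))
    return False

print(has_net_bind_service())
```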

Option 2: Use authbind to grant one-time access, with finer user/group/port control:

The authbind (man page) tool exists precisely for this.

  1. Install authbind using your favorite package manager.

  2. Configure it to grant access to the relevant ports, e.g. to allow 80 and 443 from all users and groups:

    sudo touch /etc/authbind/byport/80
    sudo touch /etc/authbind/byport/443
    sudo chmod 777 /etc/authbind/byport/80
    sudo chmod 777 /etc/authbind/byport/443
    
  3. Now execute your command via authbind (optionally specifying --deep or other arguments, see the man page):

    authbind --deep /path/to/binary command line args
    

    E.g.

    authbind --deep java -jar SomeServer.jar
    

There are upsides and downsides to both of the above. Option 1 grants trust to the binary but provides no control over per-port access. Option 2 grants trust to the user/group and provides control over per-port access, but older versions supported only IPv4 (since I originally wrote this, newer versions with IPv6 support have been released).

Jason C
  • Does it really need `rwx` permission? – matt Apr 23 '16 at 17:42
  • To revert the operation in Option 1, would you run the command again using `-p` instead of `+eip`? – artis3n Aug 27 '16 at 15:49
  • @eugene1832 That should be sufficient (and you could also do `-e` to e.g. disable the capability but still leave it in the permitted set). See https://www.kernel.org/pub/linux/libs/security/linux-privs/kernel-2.2/capfaq-0.2.txt question #2 for a bit more info about how effective and permitted capabilities are combined. You'd have to make the call based on your situation. – Jason C Aug 27 '16 at 15:55
  • Beware that, with setcap, if you overwrite the executable you granted privileges to (e.g. by doing a rebuild), it loses its privileged-port status and you have to grant the privileges again. – rogerdpack Oct 27 '16 at 22:25
  • Something that I had to fiddle with: I was trying to run a sysv service that runs a ruby executable. You need to give the `setcap` permission to the *version-specific* ruby executable, e.g. `/usr/bin/ruby1.9.1`. – Christian Rondeau Jan 25 '17 at 19:31
  • I have my doubts that `chmod`ing the `byport` files to 777 is the best idea. I've seen permissions ranging from `500` to `744`. I would stick to the most restrictive one that works for you. – Pere May 09 '17 at 09:09
  • For `setcap` there is no need to grant the inheritable (i) capability, and you probably should not. If you are writing the app yourself, it is better to make it capability-aware; then there is no need to set effective (e). – ctrl-alt-delor Nov 17 '17 at 15:28
  • For `authbind`: since it works at the user and port level, if you create a new user/group for the app and make the app suid/sgid to it, you can synthesise application- and port-level control. – ctrl-alt-delor Nov 17 '17 at 15:30
  • Does not work for the VirtualBox binary. – e-info128 Jun 06 '19 at 16:31
  • @ctrl-alt-delor After some tests, I saw that the effective (e) capability is required, but the inheritable (i) is not. – chmike Mar 08 '20 at 12:44
  • @chmike `e` is not needed if the program that uses capabilities is capability-aware, that is, written to copy a permitted capability to effective when needed. `e` is needed for all legacy programs. – ctrl-alt-delor Mar 08 '20 at 13:51
  • Use noob's answer that uses `iptables` to redirect port traffic. Simplest solution by far, and easy to undo if necessary. – Andrew Koster Mar 19 '20 at 02:42
  • IMO you really shouldn't be giving access to "all users and groups". Instead, pick a trusted user that needs to run this, chown the /etc/authbind/byport/80 and 443 files to that user, and chmod them so that they are executable by that user and no one else. Otherwise you're increasing your security risk, not decreasing it. – deltaray Feb 18 '21 at 14:39
  • @deltaray Normally, you'd probably want to create a trusted *group*, then add said user to that group. That simplifies management a lot, and also makes it easier to quickly revoke a user's permissions, especially if it's on multiple ports. It also simplifies application deployment if this is part of an install step. – Jason C Feb 18 '21 at 14:44
  • @AndrewKoster `nginx` is another good option along those lines, if you want to take that style of approach. – Jason C May 10 '21 at 21:59
  • @deltaray Part of the point of preventing non-root processes from binding to low ports is so that a malicious user-level process can't race the legitimate process at boot time and take over the port. – eglasius Jan 18 '23 at 13:15

I have a rather different approach. I wanted to use port 80 for a node.js server. I was unable to do it since Node.js was installed for a non-sudo user. I tried to use symlinks, but it didn't work for me.

Then I learned that I could forward connections from one port to another. So I started the server on port 3000 and set up a port forward from port 80 to port 3000.

Here are the commands:

localhost/loopback

sudo iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 3000

external

sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000

I have used the second command and it worked for me. So I think this is a middle ground for not allowing user-process to access the lower ports directly, but giving them access using port-forwarding.
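The redirection idea can also be sketched in userspace to make it concrete. iptables does this far more efficiently inside the kernel; the hypothetical Python relay below only illustrates "accept on one port, forward every connection to another":

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src reaches EOF."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already closed

def start_relay(target_port: int, listen_port: int = 0) -> int:
    """Listen on listen_port and forward each connection to target_port.

    Returns the port actually bound (useful when listen_port is 0)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen()

    def serve() -> None:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(("127.0.0.1", target_port))
            # Shuttle bytes in both directions until either side closes.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]
```

Note that a userspace relay bound to port 80 would itself need one of the other techniques on this page; the iptables REDIRECT avoids that entirely by rewriting packets in the kernel, which is why it pairs so well with an unprivileged server on port 3000.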

noob
  • `nginx` is a great option along these lines, too; easy to set up and very powerful. – Jason C May 10 '21 at 21:58
  • @JasonC I agree! It's more declarative and better supported and AFAIK nginx uses port-forward too. – noob May 11 '21 at 08:14
  • Quick question: I ran the second command a while back, and I'm looking to remove this, as I have written an automatic forwarding server/loadbalancer, and would like to deploy it under port 80. – J-Cake May 25 '21 at 09:48
  • 2
    Keep in mind you may need to bind to `0.0.0.0` instead of `127.0.0.1` for external traffic. – Soheil Sep 21 '21 at 02:52
  • As soon as I start firewalld service on my machine, the port forwarding stop working. Any suggestion on what might be happening? – Jay Joshi Oct 21 '22 at 00:20
  • I ran something analogous to `sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000` and found that https requests issued from my Docker containers stopped working, I guess because they were going via the host machine? – Ben Millwood Aug 23 '23 at 15:34

Dale Hagglund is spot on. So I'm just going to say the same thing but in a different way, with some specifics and examples. ☺

The right thing to do in the Unix and Linux worlds is:

  • to have a small, simple, easily auditable, program that runs as the superuser and binds the listening socket;
  • to have another small, simple, easily auditable, program that drops privileges, spawned by the first program;
  • to have the meat of the service, in a separate third program, run under a non-superuser account and chain loaded by the second program, expecting to simply inherit an open file descriptor for the socket.

You have the wrong idea of where the high risk is. The high risk is in reading from the network and acting upon what is read, not in the simple acts of opening a socket, binding it to a port, and calling listen(). It's the part of a service that does the actual communication that is the high risk. The parts that open, bind(), and listen(), and even (to an extent) the part that accept()s, are not the high risk and can be run under the aegis of the superuser. They don't use and act upon (with the exception of source IP addresses in the accept() case) data that are under the control of untrusted strangers over the network.

There are many ways of doing this.

inetd

As Dale Hagglund says, the old "network superserver" inetd does this. The account under which the service process is run is one of the columns in inetd.conf. It doesn't separate the listening part and the dropping privileges part into two separate programs, small and easily auditable, but it does separate off the main service code into a separate program, exec()ed in a service process that it spawns with an open file descriptor for the socket.

The difficulty of auditing isn't that much of a problem, as one only has to audit the one program. inetd's major problem is not auditing so much as that it doesn't provide simple fine-grained runtime service control, compared to more recent tools.

UCSPI-TCP and daemontools

Daniel J. Bernstein's UCSPI-TCP and daemontools packages were designed to do this in conjunction. One can alternatively use Bruce Guenter's largely equivalent daemontools-encore toolset.

The program to open the socket file descriptor and bind to the privileged local port is tcpserver, from UCSPI-TCP. It does both the listen() and the accept().

tcpserver then spawns either a service program that drops root privileges itself (because the protocol being served involves starting out as the superuser and then "logging on", as is the case with, for example, an FTP or an SSH daemon) or setuidgid which is a self-contained small and easily auditable program that solely drops privileges and then chain loads to the service program proper (no part of which thus ever runs with superuser privileges, as is the case with, say, qmail-smtpd).

A service run script would thus be for example (this one for dummyidentd for providing null IDENT service):

#!/bin/sh -e
exec 2>&1
exec \
tcpserver 0 113 \
setuidgid nobody \
dummyidentd.pl

nosh

My nosh package is designed to do this. It has a small setuidgid utility, just like the others. One slight difference is that it's usable with systemd-style "LISTEN_FDS" services as well as with UCSPI-TCP services, so the traditional tcpserver program is replaced by two separate programs: tcp-socket-listen and tcp-socket-accept.

Again, single-purpose utilities spawn and chain load one another. One interesting quirk of the design is that one can drop superuser privileges after listen() but before even accept(). Here's a run script for qmail-smtpd that indeed does exactly that:

#!/bin/nosh
fdmove -c 2 1
clearenv --keep-path --keep-locale
envdir env/
softlimit -m 70000000
tcp-socket-listen --combine4and6 --backlog 2 ::0 smtp
setuidgid qmaild
sh -c 'exec \
tcp-socket-accept -v -l "${LOCAL:-0}" -c "${MAXSMTPD:-1}" \
ucspi-socket-rules-check \
qmail-smtpd \
'

The programs that run under the aegis of the superuser are the small service-agnostic chain-loading tools fdmove, clearenv, envdir, softlimit, tcp-socket-listen, and setuidgid. By the point that sh is started, the socket is open and bound to the smtp port, and the process no longer has superuser privileges.

s6, s6-networking, and execline

Laurent Bercot's s6 and s6-networking packages were designed to do this in conjunction. The commands are structurally very similar to those of daemontools and UCSPI-TCP.

run scripts would be much the same, except for the substitution of s6-tcpserver for tcpserver and s6-setuidgid for setuidgid. However, one might also choose to make use of M. Bercot's execline toolset at the same time.

Here's an example of an FTP service, lightly modified from Wayne Marshall's original, that uses execline, s6, s6-networking, and the FTP server program from publicfile:

#!/command/execlineb -PW
multisubstitute {
    define CONLIMIT 41
    define FTP_ARCHIVE "/var/public/ftp"
}
fdmove -c 2 1
s6-envuidgid pubftp 
s6-softlimit -o25 -d250000 
s6-tcpserver -vDRH -l0 -b50 -c ${CONLIMIT} -B '220 Features: a p .' 0 21 
ftpd ${FTP_ARCHIVE}

ipsvd

Gerrit Pape's ipsvd is another toolset that runs along the same lines as ucspi-tcp and s6-networking. The tools are chpst and tcpsvd this time, but they do the same thing, and the high risk code that does the reading, processing, and writing of things sent over the network by untrusted clients is still in a separate program.

Here's M. Pape's example of running fnord in a run script:

#!/bin/sh
exec 2>&1
cd /public/10.0.5.4
exec \
chpst -m300000 -Uwwwuser \
tcpsvd -v 10.0.5.4 443 sslio -v -unobody -//etc/fnord/jail -C./cert.pem \
fnord

systemd

systemd, the new service supervision and init system that can be found in some Linux distributions, is intended to do what inetd can do. However, it doesn't use a suite of small self-contained programs. One has to audit systemd in its entirety, unfortunately.

With systemd one creates configuration files to define a socket that systemd listens on, and a service that systemd starts. The service "unit" file has settings that allow one a great deal of control over the service process, including what user it runs as.

With that user set to be a non-superuser, systemd does all of the work of opening the socket, binding it to a port, and calling listen() (and, if required, accept()) in process #1 as the superuser, and the service process that it spawns runs without superuser privileges.
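A minimal, hypothetical pair of unit files sketching this (names and paths are illustrative, not from the original answer): systemd owns and binds the socket, and the service runs as an unprivileged user, receiving the already-bound socket via the `LISTEN_FDS` protocol.

```ini
# /etc/systemd/system/example-web.socket
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# /etc/systemd/system/example-web.service
# (activated by the matching .socket unit; the process starts as www-user
# and inherits the listening socket as fd 3)
[Service]
User=www-user
ExecStart=/usr/local/bin/example-webd
```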

JdeBP

Simplest solution: remove all privileged ports on Linux

Works on Ubuntu/Debian:

#save configuration permanently
echo 'net.ipv4.ip_unprivileged_port_start=0' > /etc/sysctl.d/50-unprivileged-ports.conf
#apply conf
sysctl --system

(works well for VirtualBox with non-root account)

Now, be careful about security, because all users can bind all ports!
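You can also check the current threshold programmatically. Kernels since 4.11 expose this sysctl under `/proc`; here is a small illustrative Python probe:

```python
# Read the kernel's privileged-port threshold
# (net.ipv4.ip_unprivileged_port_start). Ports below this value require
# root or CAP_NET_BIND_SERVICE to bind.
with open("/proc/sys/net/ipv4/ip_unprivileged_port_start") as f:
    start = int(f.read())

print("unprivileged ports start at", start)  # 1024 by default; 0 after the change above
```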

soleuu
  • That's clever. One small nit: the configuration opens 80 and 443, but it also opens all the other ports. Relaxing permissions on the other ports may not be desired. – jww Sep 13 '19 at 13:36
  • Nice solution. I have used it for IPv6 and it is working perfectly. Here is what I've done: https://docs.google.com/document/d/e/2PACX-1vQODnPB6pUCjIcNHgyIJTYmuid4YYxpjvfWfgGNOVJBEfQTJ-It1mOJC-BXUbaRBKiG-IT1BkU4_HQq/pub – Fernando Apr 23 '20 at 11:32
  • Seems to be the simplest solution; however, is there a way to only open 80 and 443 to a certain group? – mekb Aug 26 '21 at 02:45
  • Allowing only 80 and 443 is not possible with this method. You can change the value to 80, but it will allow the port range 80-1024 for non-root users. – soleuu Sep 16 '21 at 13:39
  • > be careful about security because all users can bind all ports — Can someone elaborate on why this is bad? – Nate-Wilkins Mar 20 '22 at 16:01

Your instincts are entirely correct: it's a bad idea to have a large, complex program run as root, because its complexity makes it hard to trust.

But, it's also a bad idea to allow regular users to bind to privileged ports, because such ports usually represent important system services.

The standard approach to resolving this apparent contradiction is privilege separation. The basic idea is to separate your program into two (or more) parts, each of which does a well-defined piece of the overall application, and which communicate by simple limited interfaces.

In the example you give, you want to separate your program into two pieces. One that runs as root and opens and binds to the privileged socket, and then hands it off somehow to the other part, which runs as a regular user.

There are two main ways to achieve this separation.

  1. A single program that starts as root. The very first thing it does is create the necessary socket, in as simple and limited a way as possible. Then, it drops privileges, that is, it converts itself into a regular user mode process, and does all other work. Dropping privileges correctly is tricky, so please take the time to study the right way to do it.

  2. A pair of programs that communicate over a socket pair created by a parent process. A non-privileged driver program receives initial arguments and perhaps does some basic argument validation. It creates a pair of connected sockets via socketpair(), and then forks and execs two other programs that will do the real work and communicate via the socket pair. One of these is privileged and will create the server socket and perform any other privileged operations, and the other will do the more complex and therefore less trustworthy application processing.
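The fd-passing mechanics of the second approach can be sketched in Python (3.9+, for `socket.send_fds`/`recv_fds`). This runs entirely as a normal user on an ephemeral port, so nothing here is actually privileged; a real parent would be root, bind port 80, and the child would drop to its own account (setgid()/setuid()) before touching network data:

```python
import os
import socket

# Unix socket pair over which the listening fd will be passed.
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The "privileged" step: create and bind the listening socket.
# (Port 0 here; a real privileged parent would bind port 80.)
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

pid = os.fork()
if pid == 0:
    # Child: this is where a real worker would drop privileges
    # before reading anything from the network.
    parent_sock.close()
    _, fds, _, _ = socket.recv_fds(child_sock, 1024, 1)
    srv = socket.socket(fileno=fds[0])      # rebuild the passed socket
    conn, _ = srv.accept()
    conn.sendall(b"hello from the unprivileged worker\n")
    conn.close()
    os._exit(0)

# Parent: hand the bound socket to the worker, then get rid of it.
child_sock.close()
socket.send_fds(parent_sock, [b"fd"], [listener.fileno()])
listener.close()

client = socket.create_connection(("127.0.0.1", port))
greeting = b"".join(iter(lambda: client.recv(4096), b""))
print(greeting.decode(), end="")
os.waitpid(pid, 0)
```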

Nathan Tuggy
  • What you're proposing isn't considered best practice. You might look at inetd, which can listen on a privileged socket and then hand that socket off to an unprivileged program. – Dale Hagglund Feb 02 '14 at 09:59
  • Probably good advice if you are designing the program. If you just want to run a program that accepts a port as an argument, what would you do then? – jontejj Jan 21 '21 at 21:34
  • @jontejj Just to make sure I'm clear, you're talking about a program that accepts a port number to listen on via the command line? I'd start by seeing if there was any way to use a non-privileged port, to avoid needing root privileges. There might be a way to use Linux capability tools to assign just the right to open privileged ports when you run the program. – Dale Hagglund Jan 22 '21 at 03:55
  • `authbind` seems to be the way to go for one-offs? – jontejj Jan 22 '21 at 05:34
  • @jontejj I'm not familiar with it so I can't say. – Dale Hagglund Jan 22 '21 at 07:27

If you are running systemd on Linux, then you can simply add this to the service unit file:

# /etc/systemd/system/http_server.service
# ...
[Service]
# ...
AmbientCapabilities = CAP_NET_BIND_SERVICE

And, if, in addition, you want your web server to never gain additional capabilities, you may also add:

CapabilityBoundingSet = CAP_NET_BIND_SERVICE

Also see the systemd.exec(5) man page for a description of those systemd service unit file configuration options, which define the execution environment of spawned processes.

thx1111

What is the simplest thing that could possibly work?

A reverse proxy. Nginx is simpler than iptables (for me anyways). Nginx also offers "ssl termination".

sudo apt install nginx
sudo service nginx start
# Verify it's working
curl http://localhost
# make certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt
sudo nano /etc/nginx/conf.d/devserver.conf 

Add this content:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      proxy_pass          http://localhost:8080;
    }
}

Then restart the server:

sudo service nginx restart

Configure DNS: an A record for www.example.com -> 127.0.0.1 (for local testing, an /etc/hosts entry works too).

# Test it out:
curl --insecure --verbose https://www.example.com
Michael Cole