Postfix – OpenDKIM not signing outgoing email (not authenticated)

I had this weird issue. On CentOS 7 I set up a mail server with Postfix, Dovecot, ClamAV, rspamd and OpenDKIM. Everything was working fine, except that none of my outgoing emails were being signed with DKIM.

The mail log showed this message:

...
Mar 2 14:02:16 vps opendkim[10810]: 6FD942E30BB: not authenticated
Mar 2 14:02:16 vps opendkim[10810]: 6FD942E30BB: no signature data
...

After a lot of trying different configs and options, I found the answer on a forum. Apparently, OpenDKIM thought the user wasn't authenticated, when in reality it was. The cause was the Postfix configuration: a milter macro was missing. I added the {auth_type} macro to milter_mail_macros, then restarted OpenDKIM and Postfix, and signing started to work properly.

Now my Postfix milter configuration in main.cf looks like this:

milter_protocol = 6
milter_default_action = accept
milter_connect_macros = j {daemon_name} v {if_name} _
milter_mail_macros = i {mail_addr} {client_addr} {client_name} {auth_type} {auth_authen}
smtpd_milters = inet:localhost:11332 inet:localhost:8891 unix:/var/spool/postfix/clamav/clamav-milter.socket
non_smtpd_milters = $smtpd_milters
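
After changing main.cf, restart both services and confirm the new macro list with postconf (commands assume systemd, as on CentOS 7):

systemctl restart opendkim postfix
postconf milter_mail_macros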

And emails are getting signed correctly:

...
Mar 2 14:15:56 vps opendkim[11513]: 8C5213E36D: DKIM-Signature field added (s=mail, d=domain.com)
...

CSF – whitelist user from SMTP_BLOCK

CSF features a great option, SMTP_BLOCK, which blocks outgoing SMTP for all users except root, exim and mailman. I had a problem with one user who was using MailChimp for mass mailing within their application. Because of SMTP_BLOCK it wasn't working. Disabling SMTP_BLOCK globally is not recommended; instead, you can whitelist the users that should be allowed to send.

Go to your CSF settings and find SMTP_ALLOWUSER. Then add the users which should be allowed (comma-separated). Don't forget to restart CSF.
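
In /etc/csf/csf.conf the entry ends up looking something like this – the second username is just an example – and csf -r reloads the configuration:

SMTP_ALLOWUSER = "cpanel,someuser"
csf -r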

Disable OPcache for a specific PHP script – exclude from OPcache

Sometimes acceleration with OPcache can cause problems with your application scripts. In those cases, when a script shouldn't be accelerated, you can list it in OPcache's blacklist, which excludes the listed files from acceleration. The example below was done on CentOS 7.

First, find the configuration file for your OPcache PHP extension. You can do something like this:

[root@meow php.d]# php -i | grep opcache | grep ini
Additional .ini files parsed => /etc/php.d/10-opcache.ini,

Open 10-opcache.ini and you should see something like below – the path to OPcache's blacklist file:

; The location of the OPcache blacklist file (wildcards allowed).
; Each OPcache blacklist file is a text file that holds the names of files
; that should not be accelerated.
opcache.blacklist_filename=/etc/php.d/opcache*.blacklist

Close 10-opcache.ini and open the file named opcache-default.blacklist, which should be in the same directory. If it doesn't exist, create it. This file contains the list of PHP scripts that OPcache should ignore.

[root@meow php.d]# cat opcache-default.blacklist
; The blacklist file is a text file that holds the names of files
; that should not be accelerated. The file format is to add each filename
/path/to/ignored/script/ignoreThis.php
/path/to/another/ignored/script/ignoreThisToo.php
...
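
Note that OPcache only reads the blacklist when PHP starts, so restart your PHP handler after editing it – on CentOS 7 that's typically one of:

systemctl restart httpd     # Apache with mod_php
systemctl restart php-fpm   # PHP-FPM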

cPanel email problem – (13): Permission denied: failed to chdir to /home/username

I had this weird issue on one of our production cPanel servers where a user's email stopped working for no apparent reason. The only error available was:

T=dovecot_virtual_delivery defer (13): Permission denied: failed to chdir to /home/username

From time to time the user's document root permissions were being set to the user nobody and execute permissions were removed. Because of this, email wasn't working and I couldn't figure out why.

After a lot of headache I came across a forum thread. The permissions were being altered by cPanel's File Protect. Somehow File Protect decided this account's permissions weren't right. After checking the user's account, I found a sub-domain whose document root was set to "/". This is not a valid document root, and because of it, File Protect altered the user's permissions.

I changed the document root for this sub-domain and the problem was solved. After fixing the File Protect issue, you should also correct the user's permissions on their home directory:

chmod +x /home/username
chown username:username /home/username

You should make sure that the account's permissions are absolutely correct.
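
If you want to check whether other accounts have a document root set to "/", you can grep cPanel's userdata files; a rough sketch, assuming the standard /var/cpanel/userdata layout (the key name may differ between cPanel versions):

grep -r 'documentroot: /$' /var/cpanel/userdata/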

Hope this saves some sleep 🙂

Get list of mass/multi domain redirects with CURL

I had a large list of domains and had to check which location each of them points or redirects to. curl is the best tool for this kind of work. To save some time, I wrote this simple one-liner which will do it for you.

First, create a txt file containing the list of all domains you want to check. For this example I created domains.txt.

Then run this command, replacing the file name with yours.

> $ for i in `cat domains.txt`; do echo -n "$i -> "; curl -I -s -L -o /dev/null -w '%{url_effective}' "$i"; echo; done

This gives you each domain name with the location it redirects to:

domain1.com -> https://www.domain1.com/sl 
domain1.de -> https://www.domain1.com/de 
domain2.si -> http://domain2.si/si 
example.com -> https://www.example.com/
lalala.es -> https://www.lalala.es/spain
bash.com -> https://www.bash.com/i/love
...
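
If you also want the final HTTP status code, curl's -w format supports %{http_code} as well:

> $ for i in `cat domains.txt`; do echo -n "$i -> "; curl -I -s -L -o /dev/null -w '%{http_code} %{url_effective}' "$i"; echo; done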

Password protect Netdata with NGINX / Permission denied while connecting to upstream error

Netdata is a great free tool for generating server statistics. By default it's open to the entire world on port 19999 – http://servername.com:19999. It is not a good idea to leave this open so that everyone can see your system statistics.

One way to limit where it is accessible from is to edit netdata.conf and specify IPs in the "allow connections from" variable:

[web]
allow connections from = <IPs that are allowed to access>

There is no option to password protect it, but this can be done with NGINX. You can create a reverse proxy so that nginx serves the content from the netdata application. To make netdata accessible under a subfolder of your hostname, e.g. http://my.hostname.com/netdata, create an nginx configuration like the one below.

First, generate a password file for nginx:

htpasswd -c /etc/nginx/.htpasswd "username"
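
On CentOS 7 the htpasswd utility is provided by the httpd-tools package, in case it's not installed yet:

yum install httpd-tools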

Then create a new nginx configuration, or edit an existing one, to look something like this:

upstream netdata {
        server 127.0.0.1:19999;
        keepalive 64;
}

server {
     listen 443 ssl http2;
     server_name my.hostname.com;
     location = /netdata {
         return 301 /netdata/;
    }

    location ~ /netdata/(?<ndpath>.*) {

        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_redirect off;
        proxy_set_header Host $host;   

        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
        proxy_pass http://netdata/$ndpath$is_args$args;

        gzip on;
        gzip_proxied any;
        gzip_types *;
   }

    error_log /var/log/nginx/error.log;
    access_log off;
    ....
}

Also, don't forget to edit netdata.conf and change a few variables so that netdata is accessible only from localhost (nginx):

[web]
bind to = 127.0.0.1
allow connections from = localhost
allow dashboard from = localhost

You should also allow connections to port 19999 from local traffic (localhost) only.
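
Since netdata now binds to 127.0.0.1 this is already the case, but as an extra safety net you can also drop external traffic to the port at the firewall; a sketch with plain iptables:

iptables -A INPUT -p tcp --dport 19999 ! -i lo -j DROP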

Restart nginx and netdata, then try to access it at http(s)://my.hostname.com/netdata.

If you're getting an error like the one below in your nginx error log, then chances are that SELinux is active. Disable SELinux or execute the command "setsebool -P httpd_can_network_connect true".

[crit] 8411#0: *1 connect() to 127.0.0.1:19999 failed (13: Permission denied) while connecting to upstream, client: 8.8.8.8, server: my.hostname.com, request: "GET /netdata/ HTTP/1.1", upstream: "http://127.0.0.1:19999/", host: "my.hostname.com"

NGINX: rewrite non-www to www for multi domain virtual hosts

If you have an NGINX virtual host with multiple different domains pointing to the same document root (multiple server_names), and you want to automatically redirect non-www to www, then below is a simple solution. I also wanted to redirect to https along with www.

If you don't need the https redirection, then you can simply use the variable $scheme instead of "https:".

if ( $host !~ ^www\. ) {
            return 302 https://www.$host$request_uri;
}

So the virtual host should look something like this:

server {
      listen 1.1.1.1:80;
      server_name domain1.com www.domain1.com domain2.com www.domain2.com;

      if ( $host !~ ^www\. ) {
           return 302 https://www.$host$request_uri;
      }
      return 302 https://$host$request_uri;
}

You should also add this redirect to your https server definition; otherwise a request for https://domain1.com won't redirect to www.

server {
      listen 1.1.1.1:443 ssl;
      server_name domain1.com www.domain1.com domain2.com www.domain2.com;
      if ( $host !~ ^www\. ) {
              return 302 https://www.$host$request_uri;
      }

      ssl_certificate /etc/letsencrypt/live/domains.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/domains.com/privkey.pem;

      # ... other nginx configuration ...
}
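
You can verify the redirect with curl; for the example domain the output should look roughly like this:

> $ curl -sI http://domain1.com/ | grep -iE '^(HTTP|Location)'
HTTP/1.1 302 Moved Temporarily
Location: https://www.domain1.com/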

cPanel’s Awstats: There are no domains which have awstats stats to display

One client had an issue with Awstats statistics – they had stopped working. When he tried to add a new domain and check Awstats in cPanel, this message was shown:

There are no domains which have awstats stats to display

Awstats was configured correctly and the cron was being executed. I then discovered that there was no active access log in /usr/local/apache/domlogs/domainname.com. So I tried to tail this log while visiting the web site – no access log was being generated.
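
The tail command was something like this, with domainname.com standing in for the real domain:

tail -f /usr/local/apache/domlogs/domainname.com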

A while ago I had enabled cPanel's "Piped Log Configuration" option, which was suggested as a way to speed up the cPanel control panel experience. When this was disabled, the per-domain access logs started working again.

Magento: PHP Fatal error – Allowed memory size exhausted when running bin/magento module:status

This was a strange one. When calling a simple Magento command with the PHP CLI, I was getting an error that the allowed memory size was exhausted.

[root@machine ~]# php bin/magento module:status
PHP Fatal error: Allowed memory size of 2097152 bytes exhausted (tried to allocate 32768 bytes) in /path/to/wwww/magentoshot.com/vendor/symfony/console/Application.php on line 951

I checked php.ini and it was set like this:

memory_limit = 2048M

I checked whether the CLI version uses different values. They were the same.
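
A quick way to check what the CLI actually uses (output shown as it would look with the 2048M value from this case):

> $ php -i | grep memory_limit
memory_limit => 2048M => 2048M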

The solution was simple: change the php.ini value for memory_limit and define it in gigabytes instead of megabytes.

memory_limit = 2G

I restarted Apache and it started to work.

Find common/identical lines within two files without DIFF

Here is a really simple trick for finding strings that are the same within two different files.

For demonstration purposes I created two files with some text in them. Some of the text is the same, some is not.

File 1:

> $ cat file1.txt 
test1
test2
test3
test4
test5

File 2:

> $ cat file2.txt 
lala1
lala2
test3
test4
lala4
lala6

Here is how to find the strings that appear in both files:

> $ cat file1.txt file2.txt | sort | uniq -c | grep "2 " 
2 test3
2 test4

So the strings test3 and test4 occur in both files.
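
Note that a line occurring twice within a single file would also get a count of 2 with the pipeline above. A cleaner alternative that only reports lines present in both files is comm, which expects sorted input:

> $ comm -12 <(sort file1.txt) <(sort file2.txt)
test3
test4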
