
Users see PHP errors. Swap, /var/tmp, and /tmp all full

Comments (7)

  • cPRex Jurassic Moderator
    Hey there! I wouldn't expect the database change to be related to the issue, but you never know until you verify what is actually taking up space. I would look at /tmp on the system and see if you can find what is using all the space. It could be a bad script that isn't clearing up temporary files, or something to do with databases, but the only way to know for sure is to take a look with SSH.
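    For example, something along these lines over SSH will usually show what's eating the space (the paths here are just the usual suspects, so adjust as needed; this is only a rough sketch):

        df -h /tmp /var/tmp                              # confirm which filesystems are actually full
        du -ah /tmp 2>/dev/null | sort -rh | head -20    # largest files and directories under /tmp
        ls -lhS /tmp | head                              # biggest individual files at the top level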
  • T1531
    cPRex Jurassic Moderator wrote:
        Hey there! I wouldn't expect the database change to be related to the issue, but you never know until you verify what is actually taking up space. I would look at /tmp on the system and see if you can find what is using all the space. It could be a bad script that isn't clearing up temporary files, or something to do with databases, but the only way to know for sure is to take a look with SSH.

    It happened again, so this time I checked before rebooting. Inside /tmp there was a nearly 4 GB file from MariaDB: #sql-temptable-53e-1fbf2-86d.MAD. I checked the MySQL processes and there was one that had been running for several hours. Maybe it was stuck? It looked like a regular SELECT with a few joins, nothing special.

    Meanwhile, I received an email that my automatic backup (through cPanel) had failed, but this appears to have been a symptom, not a cause. Later, after rebooting to fix everything, I ran the backup manually and it worked, and the issue has also happened before at times unrelated to the backup, so for those two reasons I don't think the backup is causing it.

    Before rebooting the server, I tried killing the SQL process, but it seemed to get stuck trying to delete the temporary table while being killed. Then I tried restarting just SQL through Restart Services > SQL Server (MySQL). That didn't work, so I tried a Graceful Server Reboot, which also didn't work; I assume it was stuck waiting for the SQL process. At that point the Graceful Server Reboot had brought cPanel and the entire site down but was sitting there doing nothing, so I did a hard reboot of the actual server. When everything came back online, the temporary table file was gone and everything was working as expected. Rebooting always fixes the issue (until it happens again).

    As I mentioned, the only thing I did before this started happening was upgrade MariaDB. That, plus the temporary table coming from MariaDB, makes me think it could be related. The only other issue I've noticed since the update is that MySQL Workbench no longer allows a remote connection, failing with: "SSL connection error: SSL is required but the server doesn't support it." I haven't started troubleshooting that one yet though.
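    For reference, the checking and killing can also be done from the mysql command line; this is only a rough sketch, and the ID below is just a placeholder taken from the processlist output:

        SHOW FULL PROCESSLIST;   -- full SQL text, time and state of every running query
        KILL QUERY 12345;        -- placeholder ID; aborts just the query, keeps the connection
        KILL 12345;              -- or drop the whole connection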
  • ffeingol
    You should be able to go into phpMyAdmin and look at the MySQL processes that are running. That will let you know which account it is and at least some of the SQL causing the temp table. That should help you track things down.
  • cPRex Jurassic Moderator
    There's also the MySQL error log, which is typically /var/log/mysqld.log
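    If it isn't there, asking the server itself will show where it's actually writing the error log (an empty value means it's going to stderr/syslog instead):

        SHOW VARIABLES LIKE 'log_error';   -- path of the error log the server is using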
  • T1531
    ffeingol wrote:
        You should be able to go into phpMyAdmin and look at the MySQL processes that are running. That will let you know which account it is and at least some of the SQL causing the temp table. That should help you track things down.

    Yes, I did look at the processes, as I mentioned above. However, the SELECT query that appeared to be stuck didn't seem to be anything unusual. Though if/when this happens again, I'll check the processes again, and if the same query is stuck there, I'll start assuming it really could be related.
    cPRex Jurassic Moderator wrote:
        There's also the MySQL error log, which is typically /var/log/mysqld.log

    For MariaDB, I found the error log here: /var/lib/mysql/[hostname].err. Starting from the bottom, there are tons of "disk is full writing" errors about /tmp, which makes sense. Scrolling up to just before those errors start, there's a single:

        [Warning] Aborted connection 130040 to db: [database] user: [user] host: 'localhost' (Got an error writing communication packets)

    Then tons of these:

        [Warning] Aborted connection 127792 to db: 'unconnected' user: 'unauthenticated' host: '[IP address]' (This connection closed normally without authentication)

    The IP address I removed above is one I don't recognize, from a different country. A couple of hours earlier, the same error appears with a different IP (also from a different country). Is this something malicious?

    Edit: Also mixed in with the "unconnected" warnings, I see this similar one:

        [Warning] Access denied for user 'Cpanel::MysqlUtils::Unprivileged'@'localhost' (using password: NO)
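    (A rough per-host count of those aborted-connection warnings can be pulled straight out of the log; substitute your own hostname in the .err path:)

        grep 'Aborted connection' /var/lib/mysql/$(hostname).err \
          | grep -o "host: '[^']*'" | sort | uniq -c | sort -rn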
  • ffeingol
    Next time it happens, click the "T" to the right of "SQL query" in phpMyAdmin (Status > Processes). That will show you the full query that's running. My guess would be that there is some sort of aggregate (SUM, AVG, etc.) causing the temp table to get created. SQL queries don't get stuck per se; that would indicate there's an incorrect join or similar and they're just not getting the results they expect.
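    As a completely made-up example (the table and column names below are invented), running EXPLAIN on that kind of query will usually show what's going on:

        EXPLAIN
        SELECT o.customer_id, SUM(o.total) AS spent
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        GROUP BY o.customer_id
        ORDER BY spent DESC;
        -- "Using temporary; Using filesort" in the Extra column means the server builds an
        -- internal temp table for the GROUP BY/ORDER BY; once it outgrows tmp_table_size /
        -- max_heap_table_size it spills to disk, which is where a #sql-temptable-*.MAD file
        -- like yours tends to come from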
  • T1531
    ffeingol wrote:
        Next time it happens, click the "T" to the right of "SQL query" in phpMyAdmin (Status > Processes). That will show you the full query that's running. My guess would be that there is some sort of aggregate (SUM, AVG, etc.) causing the temp table to get created. SQL queries don't get stuck per se; that would indicate there's an incorrect join or similar and they're just not getting the results they expect.

    Fortunately, I did save the query from earlier, and you're right, it was the cause. I ran the query myself in phpMyAdmin and the temporary table file was instantly created (the same exact size, nearly 4 GB) and everything locked up. Apparently it wasn't a harmless little SELECT after all. I've updated the software this query comes from, so hopefully it won't keep happening; otherwise I'll go to their support.
