
Plex server web interface constantly hanging with tuner configured

WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
edited February 27 in Linux

I recently bought an HDHomeRun for my Plex server and it seemed to work great for about 24 hours. After that, the server keeps crashing. If you check the service, it still says it's running, but you cannot load the web interface.

I have removed the tuner to see if the server will re-stabilize.

Best Answer

  • sa2000 Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 31,496 Plex Team Member
    edited February 1 Accepted Answer

    @WatchTowerPlex said:
    @sa2000 Does this version Plex Media Server 1.11.1.4753 cover this issue?

    It should be fixed. There are deadlock/lockout fixes still to be released, but the issue observed in your logs and process dump should be fixed in this release.

    http://forums.plex.tv/discussion/comment/1606344/#Comment_1606344

    • (Transcoder) Playback could cause deadlocks in certain situations (#8079)

Answers

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    Server Version 1.11.0.4666
    Operating System: CentOS Linux 7 (Core)
    Kernel: Linux 3.10.0-693.11.6.el7.x86_64

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    The server isn't crashing. Something is shutting it down.
    There is something attempting to provide OCSP responses to PMS.

    Jan 05, 2018 10:01:42.670 [0x7f82d3fff700] DEBUG - Versions: garbage collecting
    Jan 05, 2018 10:01:42.682 [0x7f82d3fff700] DEBUG - Versions: garbage collected in 0.0 seconds
    Jan 05, 2018 10:05:37.856 [0x7f82c2ffd700] DEBUG - EPG[onconnect]: Purging 0 airings which completed in the past.
    Jan 05, 2018 10:05:47.499 [0x7f82c2ffd700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:05:47.549 [0x7f82c2ffd700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:05:47.550 [0x7f82c2ffd700] ERROR - OCSP response error: unauthorized
    Jan 05, 2018 10:10:37.858 [0x7f82c2ffd700] DEBUG - EPG[onconnect]: Purging 0 airings which completed in the past.
    Jan 05, 2018 10:10:47.551 [0x7f82c2ffd700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:10:47.599 [0x7f82c2ffd700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:10:47.600 [0x7f82c2ffd700] ERROR - OCSP response error: unauthorized
    Jan 05, 2018 10:15:37.860 [0x7f82c2ffd700] DEBUG - EPG[onconnect]: Purging 0 airings which completed in the past.
    Jan 05, 2018 10:15:47.601 [0x7f82c2ffd700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:15:48.178 [0x7f82c2ffd700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:15:48.179 [0x7f82c2ffd700] ERROR - OCSP response error: unauthorized
    Jan 05, 2018 10:20:28.876 [0x7f82a4bfe700] DEBUG - Sync: uploadStatus
    Jan 05, 2018 10:20:37.862 [0x7f82dc7ff700] DEBUG - EPG[onconnect]: Purging 0 airings which completed in the past.
    Jan 05, 2018 10:20:48.180 [0x7f82dc7ff700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:20:48.335 [0x7f82dc7ff700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:20:48.336 [0x7f82dc7ff700] ERROR - OCSP response error: unauthorized
    Jan 05, 2018 10:25:37.864 [0x7f82dc7ff700] DEBUG - EPG[onconnect]: Purging 0 airings which completed in the past.
    Jan 05, 2018 10:25:48.336 [0x7f82dc7ff700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:25:48.498 [0x7f82dc7ff700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:25:48.499 [0x7f82dc7ff700] ERROR - OCSP response error: unauthorized
    Jan 05, 2018 10:26:46.790 [0x7f8317a35840] DEBUG - Shutting down with signal 15
    Jan 05, 2018 10:26:46.790 [0x7f8317a35840] DEBUG - Ordered to stop server.
    

    Please DISABLE Verbose logging until requested

    Please search before posting

    Primary support: Linux, Synology, and QNAP

    Please remember to report back. This benefits others.

    Useful links

     Installation and Basic Setup |  Media Preparation (How to name your media files)  |  Linux Permissions 

     Handling TV Specials | Handling Movie extras  |  NAS Compatibility

     Reporting Plex Server issues | Plex Media Server FAQ | Linux Tips

     

    Other useful guides: Local Subtitles | The Plex "dance" | Synology FAQ | PMS Release Announcements

    Hosts: Fedora, QNAP, Synology, most Linux distros in VM

    No technical support via PM unless offered

    Please remember to mark the appropriate answer(s) which solved your issue.

     
  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    That has been in the logs since I built the server 5 months ago. I suspect it's PMS attempting to check the OCSP status of the cert I have configured for the web interface. I don't know why it would fail, but that is not the cause since it's been there for months.

    Edit:
    Correction: the server stopped responding at 2:11 AM. Specifically, port 32400 stopped responding.

    Edit2:
    Looking into the logs myself, I see that there were 2 streams going and then they suddenly stopped. There is no smoking gun that shows an error, but that is what I have been experiencing for the last 18 hours: things are working, and all of a sudden the web interface is no longer responding. I have 2 different monitoring systems and they are reporting the same thing. One is PlexPy, and I have custom rules set up in Zabbix.

    Edit3:
    This was me restarting the service as the server never recovers.

    Jan 05, 2018 10:26:46.790 [0x7f8317a35840] DEBUG - Shutting down with signal 15
    Jan 05, 2018 10:26:46.790 [0x7f8317a35840] DEBUG - Ordered to stop server.
    
  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    On the OCSP error... it looks to me like PMS thinks the response is unauthorized, but it reports a 200 response in the line above, not a 401.

    Jan 05, 2018 10:05:47.499 [0x7f82c2ffd700] DEBUG - HTTP requesting GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:05:47.549 [0x7f82c2ffd700] DEBUG - HTTP 200 response from GET http://ocsp.comodoca.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR64T7ooMQqLLQoy%2BemBUYZQOKh6QQUu69%2BAj36pvE8hI6t7jiY7NkyMtQCEH6ABREJ5Dk19t9AmJA7pJs%3D
    Jan 05, 2018 10:05:47.550 [0x7f82c2ffd700] ERROR - OCSP response error: unauthorized
    
  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    I've checked with Engineering.

    The OCSP error is stemming from your system's cert. You need to find out why it's not working as expected. The Comodo CA is reporting it as invalid.
    This is the (401) unauthorized. The HTTP transaction is a 200 (valid) reply, meaning the Comodo CA replied to us as expected.

    I am seeing the system continuing to respond well after 2am.

    Do you have nightly maintenance scheduled to run at 2am? If PMS is running nightly media deep analysis, this will put a heavy load on the machine and has been known to make it really sluggish in responding.

    Could this be what you're seeing? I'm seeing Sync cleanup starting at 02:11 and PMS continuing to run.

    Notice the "Held transaction too long" warning below. The system is getting loaded up.

    Jan 05, 2018 02:10:46.996 [0x7f83027fe700] DEBUG - Auth: authenticated user 15044776 as nikeworldc@gmail.com
    Jan 05, 2018 02:10:47.000 [0x7f82abbfb700] DEBUG - Request: [192.168.20.64:42892 (WAN)] GET /:/timeline?time=2436000&duration=3481120&state=buffering&ratingKey=124067&key=%2Flibrary%2Fmetadata%2F124067&guid=com.plexapp.agents.themoviedb%3A%2F%2F67557%2F2%2F5%3Flang%3Den&playQueueItemID=103712 (38 live) TLS GZIP Signed-in Token (nikeworldc@gmail.com)
    Jan 05, 2018 02:10:47.001 [0x7f82abbfb700] DEBUG - Client [4a4c1a1635be0ab7f5a4b0830d7ee325] reporting timeline state buffering, progress of 2436000/3481120ms for guid=com.plexapp.agents.themoviedb://67557/2/5?lang=en, ratingKey=124067 url=, key=/library/metadata/124067, containerKey=, metadataId=124067
    Jan 05, 2018 02:10:47.004 [0x7f82abbfb700] DEBUG - Play progress on 124067 'Up, Down and Round the Farm' - got played 2436000 ms by account 15044776!
    Jan 05, 2018 02:10:47.448 [0x7f82ab3fa700] DEBUG - Request: [192.168.20.64:58044 (Subnet)] GET /web/index.html (38 live) TLS Signed-in
    Jan 05, 2018 02:10:47.448 [0x7f82ab3fa700] DEBUG - Final path: /usr/lib/plexmediaserver/Resources/Plug-ins-fc63598ba/WebClient.bundle/Contents/Resources/index.html
    Jan 05, 2018 02:10:47.448 [0x7f82ab3fa700] DEBUG - Content-Length of /usr/lib/plexmediaserver/Resources/Plug-ins-fc63598ba/WebClient.bundle/Contents/Resources/index.html is 4238.
    Jan 05, 2018 02:10:47.449 [0x7f83027fe700] DEBUG - Completed: [192.168.20.64:58044] 200 GET /web/index.html (38 live) TLS 3ms 4238 bytes (pipelined: 227)
    Jan 05, 2018 02:10:49.405 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:10:49.406 [0x7f82aabf9700] DEBUG - Request: [192.168.20.66:50394 (Subnet)] GET /status/sessions (39 live) TLS Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:10:49.461 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:10:49.462 [0x7f82aa3f8700] DEBUG - Request: [192.168.20.66:50396 (Subnet)] GET /status/sessions (40 live) TLS Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:10:51.817 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:10:51.818 [0x7f82a9bf7700] DEBUG - Request: [192.168.20.64:49024 (Subnet)] GET / (40 live) TLS GZIP Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:10:52.928 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:10:52.929 [0x7f82a93f6700] DEBUG - Request: [192.168.20.66:50398 (Subnet)] GET /library/recentlyAdded (41 live) TLS Page 0-9 Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:10:53.199 [0x7f82a93f6700] WARN - SLOW QUERY: It took 220.000000 ms to retrieve 50 items.
    Jan 05, 2018 02:10:53.217 [0x7f82a93f6700] DEBUG - Setting container serialization range to [0, 9] (total=-1)
    Jan 05, 2018 02:10:53.234 [0x7f83027fe700] DEBUG - Completed: [192.168.20.66:50398] 200 GET /library/recentlyAdded (41 live) TLS Page 0-9 305ms 10879 bytes (pipelined: 1)
    Jan 05, 2018 02:10:53.954 [0x7f83027fe700] DEBUG - handleStreamRead code 335544539: short read
    Jan 05, 2018 02:10:54.116 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:10:54.117 [0x7f82ab3fa700] DEBUG - Request: [192.168.20.222:63714 (Subnet)] GET /player/proxy/poll?deviceClass=pc&protocolVersion=1&protocolCapabilities=timeline%2Cplayback%2Cnavigation%2Cmirror%2Cplayqueues&timeout=1 (41 live) TLS GZIP Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:10:54.118 [0x7f82ab3fa700] DEBUG - Beginning read from two-way stream.
    Jan 05, 2018 02:11:01.526 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:11:01.526 [0x7f82a93f6700] DEBUG - Request: [192.168.20.66:50400 (Subnet)] GET /status/sessions (42 live) TLS Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:11:01.528 [0x7f83027fe700] DEBUG - Auth: authenticated user 1 as WatchTowerPlex
    Jan 05, 2018 02:11:01.529 [0x7f82c37fe700] DEBUG - Request: [192.168.20.66:50402 (Subnet)] GET /status/sessions (42 live) TLS Signed-in Token (WatchTowerPlex)
    Jan 05, 2018 02:11:02.040 [0x7f83027fe700] DEBUG - Using X-Forwarded-For: 172.72.4.228 as remote address
    Jan 05, 2018 02:11:02.040 [0x7f83027fe700] DEBUG - Auth: authenticated user 15044776 as nikeworldc@gmail.com
    Jan 05, 2018 02:11:02.043 [0x7f82f8bff700] DEBUG - Request: [192.168.20.64:58044 (WAN)] GET /:/timeline?time=2436000&duration=3481120&state=buffering&ratingKey=124067&key=%2Flibrary%2Fmetadata%2F124067&guid=com.plexapp.agents.themoviedb%3A%2F%2F67557%2F2%2F5%3Flang%3Den&playQueueItemID=103712 (42 live) TLS GZIP Signed-in Token (nikeworldc@gmail.com)
    Jan 05, 2018 02:11:02.045 [0x7f82f8bff700] DEBUG - Client [4a4c1a1635be0ab7f5a4b0830d7ee325] reporting timeline state buffering, progress of 2436000/3481120ms for guid=com.plexapp.agents.themoviedb://67557/2/5?lang=en, ratingKey=124067 url=, key=/library/metadata/124067, containerKey=, metadataId=124067
    Jan 05, 2018 02:11:02.048 [0x7f82f8bff700] DEBUG - Play progress on 124067 'Up, Down and Round the Farm' - got played 2436000 ms by account 15044776!
    Jan 05, 2018 02:11:03.308 [0x7f83027fe700] DEBUG - Using X-Forwarded-For: 172.72.4.228 as remote address
    Jan 05, 2018 02:11:03.308 [0x7f83027fe700] DEBUG - Auth: authenticated user 15044776 as nikeworldc@gmail.com
    Jan 05, 2018 02:11:03.310 [0x7f82c17fe700] DEBUG - Request: [192.168.20.64:49099 (WAN)] GET /video/:/transcode/universal/ping?session=4a4c1a1635be0ab7f5a4b0830d7ee325 (44 live) TLS GZIP Signed-in Token (nikeworldc@gmail.com)
    Jan 05, 2018 02:11:03.311 [0x7f83027fe700] DEBUG - Using X-Forwarded-For: 172.72.4.228 as remote address
    Jan 05, 2018 02:11:03.311 [0x7f83027fe700] DEBUG - Auth: authenticated user 15044776 as nikeworldc@gmail.com
    Jan 05, 2018 02:11:03.311 [0x7f82c17fe700] DEBUG - Found session GUID of 4a4c1a1635be0ab7f5a4b0830d7ee325 in session start.
    Jan 05, 2018 02:11:17.572 [0x7f82dc7ff700] DEBUG - [CompanionProxy] player cqysv4o2fthdl5e1tmu7vl5y was last refreshed 10 seconds ago
    Jan 05, 2018 02:11:27.572 [0x7f82c5ffe700] DEBUG - [CompanionProxy] player cqysv4o2fthdl5e1tmu7vl5y was last refreshed 20 seconds ago
    Jan 05, 2018 02:11:36.040 [0x7f82ad3fe700] DEBUG - BPQ: onConsiderProcessing: Idle (true)
    Jan 05, 2018 02:11:36.040 [0x7f82ad3fe700] DEBUG - BPQ: [Idle] -> [Processing]
    Jan 05, 2018 02:11:36.061 [0x7f82ad3fe700] DEBUG - BPQ: generating queue items from 32 generator(s)
    Jan 05, 2018 02:11:36.067 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19995379, sync item 23702875, generator 8368
    Jan 05, 2018 02:11:36.082 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19995379, sync item 23702896, generator 8369
    Jan 05, 2018 02:11:36.097 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19995379, sync item 24120813, generator 9810
    Jan 05, 2018 02:11:36.780 [0x7f82ad3fe700] WARN - Held transaction for too long (../Sync/SyncItemGenerator.cpp:142): 0.670000 seconds
    Jan 05, 2018 02:11:36.780 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23291383, generator 6985
    Jan 05, 2018 02:11:37.331 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23702914, generator 8371
    Jan 05, 2018 02:11:37.353 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23702929, generator 8372
    Jan 05, 2018 02:11:37.373 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23703077, generator 8373
    Jan 05, 2018 02:11:37.395 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23816138, generator 8897
    Jan 05, 2018 02:11:37.416 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23816141, generator 8898
    Jan 05, 2018 02:11:37.437 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23816145, generator 8899
    Jan 05, 2018 02:11:37.459 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 21357624, sync item 23952230, generator 9366
    Jan 05, 2018 02:11:37.573 [0x7f82c2ffd700] DEBUG - [CompanionProxy] player cqysv4o2fthdl5e1tmu7vl5y was last refreshed 30 seconds ago
    Jan 05, 2018 02:11:37.955 [0x7f82ad3fe700] WARN - Held transaction for too long (../Sync/SyncItemGenerator.cpp:142): 0.470000 seconds
    Jan 05, 2018 02:11:37.955 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19996689, sync item 22448245, generator 2844
    Jan 05, 2018 02:11:38.544 [0x7f82ad3fe700] WARN - Held transaction for too long (../Sync/SyncItemGenerator.cpp:142): 0.130000 seconds
    Jan 05, 2018 02:11:38.544 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19996689, sync item 23097418, generator 4906
    Jan 05, 2018 02:11:38.683 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829529, generator 3967
    Jan 05, 2018 02:11:38.690 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829542, generator 3968
    Jan 05, 2018 02:11:38.696 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829547, generator 3969
    Jan 05, 2018 02:11:38.703 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829553, generator 3970
    Jan 05, 2018 02:11:38.710 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829571, generator 3971
    Jan 05, 2018 02:11:38.717 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829574, generator 3972
    Jan 05, 2018 02:11:38.723 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 22829581, generator 3973
    Jan 05, 2018 02:11:38.730 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 23834981, generator 8945
    Jan 05, 2018 02:11:38.736 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20906371, sync item 23835006, generator 8947
    Jan 05, 2018 02:11:38.743 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19354740, sync item 23312903, generator 7039
    Jan 05, 2018 02:11:38.750 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 19354740, sync item 23725154, generator 8429
    Jan 05, 2018 02:11:38.756 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23715782, generator 8393
    Jan 05, 2018 02:11:38.763 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23715791, generator 8394
    Jan 05, 2018 02:11:38.769 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814206, generator 8890
    Jan 05, 2018 02:11:38.776 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814209, generator 8891
    Jan 05, 2018 02:11:38.783 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814220, generator 8892
    Jan 05, 2018 02:11:38.789 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814227, generator 8893
    Jan 05, 2018 02:11:38.796 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814245, generator 8894
    Jan 05, 2018 02:11:38.802 [0x7f82ad3fe700] DEBUG - Sync: updating status for sync list 20119982, sync item 23814253, generator 8895
    Jan 05, 2018 02:11:38.806 [0x7f82ad3fe700] DEBUG - BPQ: generated 0 item(s) for queue
    Jan 05, 2018 02:11:38.806 [0x7f82ad3fe700] DEBUG - PlayQueue: 0 generated IDs compressed down to a 2 byte blob.
    Jan 05, 2018 02:11:38.806 [0x7f82ad3fe700] DEBUG - PlayQueue: 0 generated IDs compressed down to a 2 byte blob.
    Jan 05, 2018 02:11:38.807 [0x7f82ad3fe700] DEBUG - BPQ: [Processing] -> [Idle]
    

    This is where I am seeing I/O loading coming into play. This is a long time for 10,000 items on a Xeon processor.

    Jan 05, 2018 04:05:51.255 [0x7f82acbfd700] DEBUG - It took 2600.000000 ms to retrieve 10399 items.

    I concur with you that I'm not seeing any authentication requests from clients after 2 AM. I'm not a security expert by any means, but in my experience, are the clients being denied at socket connect time (before being able to post the request)?
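
    If you can catch it mid-hang, a quick check from the VM itself would show whether the port is refusing connections or just never answering. A rough sketch, assuming curl and nc are installed; the /identity endpoint normally answers without a token:

    # Does port 32400 still accept a TCP connection at all?
    nc -zv -w 5 127.0.0.1 32400

    # Does PMS answer a trivial request? (-k because of the custom cert)
    curl -sk -m 10 https://127.0.0.1:32400/identity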

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited February 27

    Do you have nightly maintenance scheduled to run at 2am? If PMS is running nightly media deep analysis, this will put a heavy load on the machine and has been known to make it really sluggish in responding.

    No, this is scheduled for 3 AM. Sluggish I get, but it is outright dead and never comes back.

    Could this be what you're seeing? I'm seeing Sync cleanup starting at 02:11 and PMS continuing to run.

    I don't think so since syncs happen pretty frequently and they seem to finish quickly.

    Notice the "Held transaction too long" warning below. The system is getting loaded up.

    Right but would that cause the web interface to stop responding permanently?

    I was just able to reproduce the issue.
    Steps I took were:
    1) Set up 2 devices to stream content that is not live TV.
    2) Started watching a live stream on a laptop.
    3) Asked Plex to start recording that same live stream.

    As soon as I did that, the web interface became unresponsive.

    The only way to recover is to restart the service.

    The box is really beefy and I don't see why performance issues could cause this.
    Edit -> 32 cores allocated, dual Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
    12 GB of RAM
    10k SAS drives in RAID 10

    This server has been running flawlessly for months. It was not until I added the tuner that my issues occurred.

    Logs are attached. The server died around 12:15.

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    Please expand on this:

    Edit -> 32 cores allocated, dual Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz

    VM? Container?

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    It's a VM with shares set higher than all other VMs, both on storage and CPU.
    The other VMs have minimal use and usually maintain a steady state. Plex is by far the most utilized.

    Host info
    Hypervisor: VMware ESXi, 6.5.0, 5310538
    Model: ProLiant DL360 Gen9
    Processor Type: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
    Logical Processors: 40
    Virtual Machines: 6
    RAM: 96GB

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited February 27

    The OCSP error is stemming from your system's cert. You need to find out why it's not working as expected. The Comodo CA is reporting it as invalid.
    This is the (401) unauthorized. The HTTP transaction is a 200 (valid) reply, meaning the Comodo CA replied to us as expected.

    I checked the cert manually and OCSP is reporting a good cert. PMS may not be handling the response correctly.

    openssl ocsp -issuer chain.pem -cert cert.pem -text -url http://ocsp.comodoca.com
    OCSP Request Data:
        Version: 1 (0x0)
        Requestor List:
            Certificate ID:
              Hash Algorithm: sha1
              Issuer Name Hash: 7AE13EE8A0C42A2CB428CBE7A605461940E2A1E9
              Issuer Key Hash: 90AF6A3A945A0BD890EA125673DF43B43A28DAE7
              Serial Number: 7E80051109E43935F6DF4098903BA49B
        Request Extensions:
            OCSP Nonce:
                0410D4F7CC707DA5DD93979C4B2E431CBCEC
    OCSP Response Data:
        OCSP Response Status: successful (0x0)
        Response Type: Basic OCSP Response
        Version: 1 (0x0)
        Responder Id: 90AF6A3A945A0BD890EA125673DF43B43A28DAE7
        Produced At: Jan  3 11:20:07 2018 GMT
        Responses:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 7AE13EE8A0C42A2CB428CBE7A605461940E2A1E9
          Issuer Key Hash: 90AF6A3A945A0BD890EA125673DF43B43A28DAE7
          Serial Number: 7E80051109E43935F6DF4098903BA49B
        Cert Status: good
        This Update: Jan  3 11:20:07 2018 GMT
        Next Update: Jan 10 11:20:07 2018 GMT
    
        Signature Algorithm: sha256WithRSAEncryption
            0c:33:51:da:e2:9d:ee:60:fc:0b:b1:69:dc:5b:e5:05:90:39:
            85:6a:fe:7b:a2:aa:0b:07:cc:83:56:b6:77:4d:3e:88:d4:de:
            45:39:fa:b4:5b:16:51:6d:4c:67:2c:57:79:01:35:62:22:6d:
            d6:69:07:b0:1b:f4:59:a9:7b:70:a4:75:c4:d2:f3:8e:0c:36:
            e8:b3:78:81:73:29:ff:7b:cc:73:5a:d9:55:1d:41:70:1f:37:
            e4:40:35:67:f2:1f:26:8f:0d:cf:71:a5:60:ff:21:df:1e:75:
            e9:5e:af:6e:db:d5:bf:6a:8c:1c:0f:31:4e:d0:4d:47:7f:05:
            dc:31:62:5b:9a:20:5b:59:57:b8:55:bc:54:7b:8f:a0:c9:4c:
            fa:9f:eb:c6:1f:db:a7:f6:0f:f8:08:36:3f:14:d4:d8:d6:d0:
            5e:c8:4b:b9:f1:0f:8f:b2:01:83:19:66:ff:6f:6a:a6:cf:bf:
            14:96:6b:3e:f4:7a:de:dd:ac:cd:4a:37:4e:8c:7c:ea:73:6c:
            ed:53:77:ae:c9:29:3b:09:d9:56:04:75:4a:9f:56:37:d9:1d:
            33:8f:22:08:5e:a3:5e:ad:be:88:0e:af:2b:69:69:20:c0:c7:
            37:2b:44:e7:43:79:7d:e6:76:4a:8c:ea:2a:fe:74:ea:58:71:
            39:4b:9e:8f
    cert.pem: good
        This Update: Jan  3 11:20:07 2018 GMT
        Next Update: Jan 10 11:20:07 2018 GMT
    
  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    This is where I am seeing I/O loading coming into play. This is a long time for 10,000 items on a Xeon processor.

    Jan 05, 2018 04:05:51.255 [0x7f82acbfd700] DEBUG - It took 2600.000000 ms to retrieve 10399 items.

    I concur with you that I'm not seeing any authentication requests from clients after 2 AM. I'm not a security expert by any means, but in my experience, are the clients being denied at socket connect time (before being able to post the request)?

    Sorry, I missed this in my response. I am no expert either. I would concur that for some reason the web service is hanging and not accepting any requests.

    Is there any way to see the web logs? It seems that a process related to whatever listens on 32400 is getting stuck, but I am unsure how to track it down. It seems the server and background jobs continue to function, but for all intents and purposes whatever the clients use to communicate hangs and never comes back... Any suggestions? I am willing to troubleshoot the issue; I'm just out of my league on debugging PMS.
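
    The closest I have gotten on my own is looking at what is attached to 32400 while it is hung. A rough sketch of what I plan to run at the next hang, assuming ss and lsof are installed (run as root so the plex user's process is visible):

    # Which process owns the listener on 32400, and is its accept queue backing up?
    ss -ltnp 'sport = :32400'

    # Every descriptor PMS holds on that port, with connection states
    lsof -nP -i TCP:32400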

    As far as it taking a long time at 4 AM, I could see that, as there are several backup jobs that run around that time on other servers, but the PMS web interface was long dead before that time.

    BTW: Thanks for the prompt responses. I hope this can get figured out.

    After removing the tuner, the server has been stable for 3 hours... that is the longest in the last 18 hours... 5 streams going strong with no issues.

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member
    edited January 5

    Thanks for confirming the cert. Doing so supports what I've been considering and wanted to think through a bit before responding.

    Adding the tuner to the mix does one thing: it opens more sockets and starts the Plex Tuner Service process. Sockets to load the EPG, and sockets as it's being used. Memory to run the PID.

    If a user device makes a request to PMS and the host OS instance is out of sockets, PMS can't perform the accept() socket call to talk to it.
    If it can't do that, it won't reply.
    No processes fail.
    The symptom will be PMS appearing to have stopped responding on port 32400.
    Everything else underneath, which runs using signals and PIDs, continues to run. We see this in the logs.
    Remove the load of the network tuner communication and normal operation is restored.
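
    One way to check for that specific failure mode, assuming the net-tools netstat is installed, is to look at the kernel's own accept-queue counters; if the "listen queue overflowed" numbers climb between checks while it's hung, that supports this:

    # Kernel counters for connections dropped because an accept queue was full
    netstat -s | grep -i listen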

    I am floating/asserting the hypothesis that the VM is running out of sockets, possibly aggravated by the VM guest OS starting to swap.
    If 32+ GB has been allocated to the Guest OS, swapping is no longer a consideration.
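
    A quick check for the swap side of that, from inside the guest, using only the standard tools:

    # Memory and swap in use right now
    free -m

    # si/so columns are pages swapped in/out per second; sustained non-zero values mean the guest is swapping
    vmstat 1 5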

    I'm not very versed in ESXi, but I have never seen it run out of sockets. I have seen guest OS instances go crazy because of the VMXNET3 adapter. Ubuntu can't handle it; Ubuntu requires the e1000e adapter emulation in order to run correctly due to its kernel driver bug.

    The default maximum number of backlogged TCP sockets in Linux is 256 (net.core.somaxconn). The default close time is 120 seconds (the CLOSE_WAIT state).

    Normal PMS operation reuses sockets (by leveraging TCP keepalive, etc.) as much as possible to avoid running out of sockets.
    Tuner control operations are isolated transactions.

    The error messages from the OCSP communications are a bit of a red herring, but also an indicator.

    This leaves us with the hypothesis that the guest OS is running out of sockets, unless /etc/sysctl.conf has been modified to increase the default kernel limits.
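
    To see whether those defaults have already been raised on your guest, something along these lines should show it (standard sysctl usage, nothing Plex-specific):

    # Current values of the limits in question
    sysctl net.core.somaxconn net.ipv4.ip_local_port_range net.ipv4.tcp_fin_timeout

    # Any overrides already set to load at boot
    grep -v '^#' /etc/sysctl.conf /etc/sysctl.d/*.conf 2>/dev/null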

    Thoughts?

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    That makes a lot of sense and would explain the issue.

    Do you happen to know any commands to validate the limits and/or what needs to be added to /etc/sysctl.conf?

    I am researching in the background how to verify the limits and what should be changed. I just want to confirm things before I make changes, so that we know exactly what the issue was.

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass

    This is the output of ulimit -a

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 47317
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 47317
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    

    This system is running defaults... I can't seem to find the TCP max of 256.

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    netstat is the tool to see what's going on at any instant. PMS only uses TCP, so you can ignore all the normal Unix-domain and UDP socket traffic.

    As supplemental info, the TCP default for a workstation config is 256 backlogged sockets.
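
    For example, a couple of one-liners along these lines (assuming the net-tools netstat) give the TCP picture for PMS at a glance:

    # Every TCP socket touching the PMS port, with its state
    netstat -ant | grep ':32400'

    # State breakdown for all TCP sockets; piles of TIME_WAIT or CLOSE_WAIT point toward exhaustion
    netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn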

    What I keep handy in my /etc/sysctl.conf, to enable when running heavy 'server' apps.

    #--------------tcp tuning
    
    # Increase Read & Write memory
    #net.core.wmem_max=12582912
    #net.core.rmem_max=12582912
    
    # set minimum size (64K), initial size (512K), and maximum size
    #net.ipv4.tcp_rmem= 32768 262144 12582912
    #net.ipv4.tcp_wmem= 32768 262144 12582912
    
    # Enable scaling
    #net.ipv4.tcp_window_scaling = 1
    
    # RFC1323 timestamps
    #net.ipv4.tcp_timestamps = 1
    
    # Selective Acknowledgement
    #net.ipv4.tcp_sack = 1
    
    # disable metrics because sometimes can degrade performance 
    #net.ipv4.tcp_no_metrics_save = 1 
    
    # Increase backlog although HIGHLY doubt it will be needed
    #net.core.netdev_max_backlog = 5000
    
    # The maximum number of "backlogged sockets".  Default is 256.
    #net.core.somaxconn = 2048
    
    

    This is a bit dated but points you in the right direction.
    https://tweaked.io/guide/kernel/
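
    If you do uncomment any of those, the usual way to apply them without a reboot is plain sysctl:

    # Re-read /etc/sysctl.conf and apply the now-uncommented settings
    sudo sysctl -p

    # Or try a single value first, before persisting it
    sudo sysctl -w net.core.somaxconn=2048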

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member
    edited January 5

    ulimit is for processes (user space).

    sysctl.conf is kernel space. >:)

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass

    OK, makes sense. I am no Linux guru... I just know enough to screw things up! :smiley:

    I made these changes:

    kernel.pid_max = 4194303
    net.ipv4.ip_local_port_range = 1024 65535
    kernel.sched_migration_cost_ns = 5000000
    kernel.sched_autogroup_enabled = 0
    net.ipv4.tcp_slow_start_after_idle = 0
    net.ipv4.tcp_no_metrics_save = 0
    net.ipv4.tcp_abort_on_overflow = 0
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_syn_retries = 2
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_orphan_retries = 2
    net.ipv4.tcp_retries2 = 8
    net.core.netdev_max_backlog = 3240000
    net.core.somaxconn = 50000
    net.ipv4.tcp_max_tw_buckets = 1440000
    net.ipv4.tcp_max_syn_backlog = 3240000
    net.core.rmem_default = 16777216
    net.core.wmem_default = 16777216
    net.core.optmem_max = 16777216
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_mem = 16777216 16777216 16777216
    net.ipv4.tcp_wmem = 4096 87380 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_intvl = 10
    net.ipv4.tcp_keepalive_probes = 9
    net.ipv4.tcp_fin_timeout = 7
    

    I based them on this guide:
    http://www.queryadmin.com/1654/tuning-linux-kernel-tcp-parameters-sysctl/
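
    For anyone following along, a quick way to confirm the running kernel actually picked the new values up (standard sysctl usage):

    # Spot-check a few of the live values against what is in sysctl.conf
    sysctl net.core.somaxconn net.core.netdev_max_backlog net.ipv4.tcp_fin_timeout net.ipv4.tcp_keepalive_time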

    I also added noatime to my NFS shares, as that might improve the media scan times.
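
    For reference, a hypothetical /etc/fstab line with that option; the host name and paths are placeholders, not my actual mounts:

    # NFS media share mounted with noatime so scans don't trigger access-time writes
    nas.example.lan:/export/media  /mnt/media  nfs  defaults,noatime  0 0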

    I have the tuner configured and running. So far so good. I will report back tomorrow, or sooner depending on whether the issue continues.

    Thanks again for all your insight. I really appreciate you taking the time!!!

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    The one parameter you're missing there is

    # The maximum number of "backlogged sockets".  Default is 256.
    net.core.somaxconn = 2048
    

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited January 5

    Isn't this it?

    kernel.pid_max = 4194303
    net.ipv4.ip_local_port_range = 1024 65535
    kernel.sched_migration_cost_ns = 5000000
    kernel.sched_autogroup_enabled = 0
    net.ipv4.tcp_slow_start_after_idle = 0
    net.ipv4.tcp_no_metrics_save = 0
    net.ipv4.tcp_abort_on_overflow = 0
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_syn_retries = 2
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_orphan_retries = 2
    net.ipv4.tcp_retries2 = 8
    net.core.netdev_max_backlog = 3240000
    --->>>net.core.somaxconn = 50000<<<----
    net.ipv4.tcp_max_tw_buckets = 1440000
    net.ipv4.tcp_max_syn_backlog = 3240000
    net.core.rmem_default = 16777216
    net.core.wmem_default = 16777216
    net.core.optmem_max = 16777216
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_mem = 16777216 16777216 16777216
    net.ipv4.tcp_wmem = 4096 87380 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_intvl = 10
    net.ipv4.tcp_keepalive_probes = 9
    net.ipv4.tcp_fin_timeout = 7
    

    I just set it higher than 2048...

  • ChuckPA Members, Plex Pass, Plex Ninja, Plex Team Member Posts: 22,850 Plex Team Member

    You're right. I missed it!

    50,000 isn't needed. Are you planning on being Amazon's main server? :D

  • WatchTowerPlex Validating, Plex Pass Posts: 242 Plex Pass
    edited February 27

    Go for broke? hahah!

    Well, that was short-lived anyway because the server just stopped responding... :(

    The server stopped responding at 17:18. Restarted at 17:20.

    Edit:
    After the changes above, I restarted the VM... just to be sure.
