Posts posted by Rhododendron


  1. Seattle will work better than any other west coast location if those clients are your goal, but it's crucial to distinguish between east and west coast clients to use that location properly. The internet exchange there plays a very significant role in stable and consistent download speeds. In the past Steam has used NFO at this location for distributing game downloads (they might still do it, I'm not sure though).

     

    Latency will only affect download speeds by seconds via throughput (for hundreds of files), but routes can add minutes if there isn't a stable link to the client.

    You just asked about New York though, and I can only add two nodes, since the cluster size has to stay odd to avoid split-brain scenarios.

     

    Honestly the next step is gonna be setting up an async endpoint for the cluster, since write speeds aren't very fast.
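     

    For context on the odd node count mentioned above: with majority quorum, an even-sized cluster can split into two equal halves where neither side holds a majority, and adding a single node to an odd-sized cluster buys no extra failure tolerance. A minimal sketch of the arithmetic (not tied to any particular cluster software):

     

    # Majority quorum: a cluster of n nodes stays writable only while
    # more than half of its nodes can still see each other.
    def quorum(n: int) -> int:
        return n // 2 + 1

    for n in range(2, 8):
        tolerated = n - quorum(n)  # failures survivable before losing quorum
        print(f"{n} nodes: quorum {quorum(n)}, tolerates {tolerated} failure(s)")

    # 4 nodes tolerate no more failures than 3 do, and an even split (2 vs 2)
    # leaves neither side with a majority -- the split-brain case an odd
    # node count avoids.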


  2. @Rhododendron

    Is there any chance you could move the Seattle VPS in the cluster to New York?

     

    It would be great if the higher routing quality could be put to good use for content distribution improvements, especially sending things over fastdl, for more stability. A more significant internet exchange point combined with additional transit options are the key optimization features here. This is the core reason why many websites prefer to be hosted there and why ISPs tend to optimize that location before others.

     

    Interconnectivity isn't the best in Chicago or Seattle, especially for web servers. While they may be good for game servers, actually sending content to clients is a different story. There aren't nearly as many routes in those locations to reach clients, which is something web servers rely on to perform well. A lot of Chicago routes also favor traffic shaping, which slows download speeds (thanks to NFO's emphasis on DDoS mitigation and UDP game-server traffic). Given that there are fewer transit/peering options, downloads are more likely to take a hit from congestion.

     

    A good suggestion for Teamspeak, if you want it to operate better on a world-wide scale, would be to test it out in New York. Much like web servers, latency isn't nearly as big a concern as routing quality. Stability and clarity trump a 1/25th of a second delay in voice transmission. Fewer hops for European and South American clients plus more routes for everyone is a win-win scenario. For similar reasons, this location is of interest to me for sending server content.

    I'm moving Teamspeak to NY; you're right, it does make more sense.

     

    However, I think the Seattle server should be moved further south, as Chicago is already well suited for people on the east coast.
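     

    On the route-count point in the quoted post above, one rough way to compare locations is to count traceroute hops toward a handful of representative client networks. A minimal sketch, assuming the standard traceroute utility is installed on the box; the target list here is purely illustrative:

     

    import subprocess

    # Illustrative targets; real ones would be drawn from actual client IP ranges.
    TARGETS = ["8.8.8.8", "1.1.1.1"]

    def hop_count(host: str) -> int:
        # -n skips reverse DNS, -q 1 sends one probe per hop, -w 2 waits 2s per probe.
        out = subprocess.run(
            ["traceroute", "-n", "-q", "1", "-w", "2", host],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        return len(out) - 1  # first line is the traceroute header

    for target in TARGETS:
        print(target, hop_count(target), "hops")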


  3. 15% is huge when it comes to single-threaded performance on game servers. And faster memory would likely boost other elements as well.

     

    It may even be more than 15%, as Intel's architecture improvements from Sandy Bridge to Haswell are pretty substantial.

    We'll see next month.


  4. A VPS will slow down the server. And if it's with another hosting provider, there won't be a managed control panel to operate the server.

     

    IMHO, it makes more sense to just upgrade what you have currently. Another VPS would cost $20 anyway for less performance.

    It would have to wait until next month at the earliest, as this month has already been paid for.


  5. @Rhododendron

     

    One or more of the servers on the main box is consuming a large amount of disk I/O, which is dramatically slowing down the speed at which the others can change level. It's abnormal for this to happen and is not by any means 'justified' when running on a solid state drive. Anywhere from 15 to 20 seconds is being added to level changes during peak hours on Morbus. I would like to get to the bottom of this issue as soon as possible.

     

    If this issue cannot be pinpointed on our end, I would like to request that a passworded GMOD test server with a stock configuration be put on the main box (no excuses for slow level-change speeds). This would give us all the tools needed to escalate the issue to the hosting provider, so they can go into the machine and find the culprit.

     

    A ticket I've opened with the hosting provider is pending a response for John to go through the machine.

    What will you use to test the speed, given that you don't have CLI access?
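     

    If shell access does open up on the requested test box, one rough way to sanity-check sequential disk throughput is to time a large write and read. A minimal sketch (the file size and path are arbitrary; this is not any agreed-upon tooling):

     

    import os, time

    PATH, SIZE, CHUNK = "iotest.bin", 1 << 30, 1 << 20  # 1 GiB in 1 MiB chunks
    buf = os.urandom(CHUNK)  # generate the chunk once so we time the disk, not the RNG

    start = time.monotonic()
    with open(PATH, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    print(f"write: {SIZE / (time.monotonic() - start) / 1e6:.0f} MB/s")

    start = time.monotonic()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    # This read mostly comes from the page cache unless caches are dropped
    # first, so treat it as an upper bound.
    print(f"read:  {SIZE / (time.monotonic() - start) / 1e6:.0f} MB/s")
    os.remove(PATH)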


  6. Availability will depend on the size of your machine. They are significantly more likely to have room for a two-core VPS.

    OP4 is already on that machine, but they are moving OP2 over to it as well.


  7. @Rhododendron

    What processor is the Chicago VPS using currently? Could it be the slower hard drive? SSD caches and DDR4 RAM were not introduced until the E5-2697 v3.

     

    If it's using the E5-2690 v2, could a move to an E5-2697 v3 machine be requested via support ticket?

    I've asked them twice now and they said they don't have any available.
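     

    For what it's worth, the current CPU model can be read from inside the guest without any extra tooling. A minimal sketch, assuming a Linux VPS:

     

    # Print the CPU model string reported by the kernel.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                print(line.split(":", 1)[1].strip())
                break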


  8. @Rhododendron How does the proxy work? Does it have to check op1 each time? Is there something else slowing down processing time?

     

    The download time is faster now in Chicago, but it's still 21.46s slower than expected.

    It might be Route 53's DNS. There are reports that it can be slow, but it's the only thing we can afford currently and it's doing everything we need.

     

    The proxy checks whether the file is on the server locally; if it's not, it grabs it from NFOServers.
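     

    In other words, it behaves like a pull-through cache. A minimal sketch of that lookup, assuming a local cache directory and an origin URL (both names here are placeholders, not the real configuration):

     

    import os
    import urllib.request

    CACHE_DIR = "/var/cache/fastdl"        # placeholder local cache path
    ORIGIN = "http://content.example.com"  # placeholder origin (the NFO-hosted source)

    def fetch(path: str) -> str:
        """Return a local file path, pulling from the origin only on a cache miss."""
        local = os.path.join(CACHE_DIR, path.lstrip("/"))
        if not os.path.exists(local):  # miss: fetch once from the origin
            os.makedirs(os.path.dirname(local), exist_ok=True)
            urllib.request.urlretrieve(ORIGIN + path, local)
        return local  # hit: served without touching the origin again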


  9. @Rhododendron

     

    The new CDN-based fastdl is copying compressed MP3 files (in .bz2) over from the op1 websync. This is excessively raising download times. There is no benefit in sending these files to clients compressed: the size stays essentially the same, and additional resources and time are required to decompress them. MP3 is already a compressed audio format.

     

    I ran a download test on Morbus with 191 files at two locations (Chicago and London). London had these improvements in place and was significantly faster despite double the latency.

     

    Chicago: 125s (approximately 7.64s is from the 40ms latency)

    London: 110s (approximately 18.145s is from the 95ms latency)

     

    Therefore, Chicago took 25s longer to download than expected.

    I'll look into it.
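     

    For reference, the expected-time arithmetic in the quoted test works out roughly as below, assuming about one extra round trip of latency per file request (that per-file assumption is mine, not something measured):

     

    FILES = 191

    def latency_overhead(rtt_s: float) -> float:
        # Assume roughly one extra round trip of delay per file request.
        return FILES * rtt_s

    for name, rtt, total in [("Chicago", 0.040, 125), ("London", 0.095, 110)]:
        overhead = latency_overhead(rtt)
        print(f"{name}: ~{overhead:.1f}s from latency, "
              f"~{total - overhead:.1f}s from transfer/processing (of {total}s)")

    # Chicago: about 7.6s of overhead, leaving about 117s.
    # London: about 18.1s of overhead, leaving about 92s -- roughly 25s less
    # than Chicago, which is the gap noted above.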


  10. If the CPU usage isn't topping 50%, extra cores would be pretty pointless, honestly.

     

    Free memory should stay well above two gigabytes though, to leave room for expansion (population growth or servers starting up) and operating system buffers/caches.

    The buffers only need about 500 MB.

     

    They said there are no available machines.
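     

    A quick way to see whether available memory is actually holding above that two-gigabyte headroom (a minimal sketch, assuming a Linux guest with a reasonably recent kernel; the 2 GB threshold just mirrors the figure above):

     

    # Parse /proc/meminfo and compare MemAvailable against a 2 GiB headroom.
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are reported in kB

    available_gib = meminfo["MemAvailable"] / (1024 * 1024)
    print(f"available: {available_gib:.2f} GiB")
    if available_gib < 2:
        print("below the suggested 2 GiB headroom")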


  11. L 07/14/2016 - 00:00:49: [SM]   [0]  Line 336, jailbreak.sp::ButtonPressed()
    L 07/14/2016 - 00:01:01: [SM] Plugin encountered error 4: Invalid parameter or parameter type
    L 07/14/2016 - 00:01:01: [SM] Native "PrintToConsole" reported: String formatted incorrectly - parameter 4 (total 3)
    L 07/14/2016 - 00:01:01: [SM] Displaying call stack trace for plugin "jailbreak/jailbreak.smx":
    L 07/14/2016 - 00:01:01: [SM]   [0]  Line 336, jailbreak.sp::ButtonPressed()

     

    Not sure if this has anything to do with it, but this error appears pretty often.

    I can fix it, but if the plugin as a whole isn't working properly, it will just break again.