--- | Log | opened Mon Dec 30 00:00:46 2019 |
01:25 | -!- | chatpeople [~oftc-webi@119.148.4.18] has joined #linode |
01:25 | -!- | chatpeople is "OFTC WebIRC Client" on #linode |
01:26 | <chatpeople> | hello |
01:26 | <chatpeople> | anybody there for help |
01:28 | <millisa> | !ask |
01:28 | <linbot> | If you have a question, feel free to just ask it -- someone's always willing to help. If you don't get a response right away, be patient! You may want to read http://alexfornuto.com/how-to-ask-for-help-on-irc/ |
01:31 | <chatpeople> | i want to increase my linode disk size |
01:31 | <chatpeople> | how to do that |
01:31 | <chatpeople> | @millisa |
01:31 | <millisa> | Disk allocation is tied to the plan size. |
01:31 | <millisa> | There's also the block storage that can be added on |
01:32 | <millisa> | Info on block storage is here: https://www.linode.com/docs/platform/block-storage/how-to-use-block-storage-with-your-linode/ |
01:32 | <millisa> | If you haven't allocated all the disk in your plan, you can increase the disk following this guide: https://www.linode.com/docs/quick-answers/linode-platform/resize-a-linode-disk/ |
01:32 | <chatpeople> | you mean the volume? option? |
01:33 | <millisa> | Info on resizing a linode to one that has a larger disk allocation would be here: https://www.linode.com/docs/platform/disk-images/resizing-a-linode/ |
01:36 | <chatpeople> | you are telling i have to change my plan right? |
01:36 | <chatpeople> | without changing the plan, i can not increase disk only
01:36 | <millisa> | If you have allocated all the disk and you don't want to add on block storage, yes? |
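(If the plan still has unallocated disk, the resize from that guide can also be done through the API. A minimal sketch, assuming the v4 disk-resize endpoint, placeholder Linode and disk IDs, and a personal access token in $LINODE_TOKEN; the size is in MB and must fit within the plan's allocation:)

```sh
# Placeholder IDs; size is megabytes and cannot exceed the plan's total allocation.
curl -X POST "https://api.linode.com/v4/linode/instances/12345/disks/67890/resize" \
     -H "Authorization: Bearer $LINODE_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"size": 51200}'
```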
01:37 | <chatpeople> | i want to stick with current plan but want to increase the disk size only |
01:40 | <chatpeople> | Would you like the disk CentOS 7 Disk to be automatically scaled with this Linode's new size? We recommend you keep this option enabled. |
01:40 | <chatpeople> | Additional verification is required to add this service. Please open a support ticket. |
01:41 | <chatpeople> | when i am trying to resize the plan it is showing Additional verification is required to add this service. Please open a support ticket.
01:41 | <millisa> | Ok |
01:44 | <chatpeople> | what shall i do |
01:45 | <millisa> | I'm assuming you'd open a support ticket |
01:47 | -!- | thiras [~thiras@195.174.212.70] has quit [Ping timeout: 480 seconds] |
01:50 | <chatpeople> | hi |
01:51 | <millisa> | er, hi |
01:51 | <chatpeople> | can you help |
01:51 | <chatpeople> | are you from support team |
01:52 | <millisa> | !ops |
01:52 | <linbot> | Users with ops are employees of Linode, and know what they're talking about. The rest of us are the ever-so-helpful(?) community. Official Linode contact information: https://www.linode.com/contact |
01:53 | <chatpeople> | hi millisa |
01:53 | <millisa> | hi, again |
01:54 | <chatpeople> | you from linode support team |
01:55 | <millisa> | Users with ops are employees. The rest of us are users. |
01:57 | <chatpeople> | understood |
01:57 | <chatpeople> | can you tell me "why it is showing"
01:57 | <chatpeople> | "Additional verification is required to add this service. Please open a support ticket." |
01:57 | <chatpeople> | why it is showing when i am going to resize |
01:58 | <millisa> | Probably because they need extra verification. I'd guess you have a new account and they want to be sure you aren't creating a large bill that you won't expect |
01:59 | <chatpeople> | i got it |
02:02 | <chatpeople> | nothing working |
02:02 | <chatpeople> | Linode has allocated more disk than the new service plan allows. Delete or resize disks smaller. |
02:02 | <chatpeople> | i think most of the given information are fake |
02:02 | <chatpeople> | i was trying to go to lower plan |
02:02 | <chatpeople> | i did not consume much space but it is showing different
02:02 | <millisa> | Your disk has to be small enough to fit into the smaller plan |
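(The error above is about allocation, not usage: the plan change checks the size of the disk images, so the disk has to be shrunk first no matter how little data is on it. A rough sketch of the order of operations, with placeholder values:)

```sh
# 1. Check how much data is actually used inside the filesystem:
df -h /
# 2. Power the Linode off and resize the disk image down (in the Cloud Manager,
#    or with the same v4 disk-resize call sketched above) to something >= the
#    used space but <= the smaller plan's allocation.
# 3. Only then resize the Linode to the smaller plan.
```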
02:08 | <chatpeople> | sometimes it is showing Linode applications-master has been booted by the Lassie watchdog |
02:08 | <chatpeople> | what is lassie watchdog |
02:08 | <chatpeople> | will it reboot the server |
02:08 | <millisa> | Linode Autonomous System Shutdown Intelligent rEbooter |
02:09 | <millisa> | here's a 12 year old article on it. https://www.linode.com/2007/10/26/lassie-the-shutdown-watchdog/ |
02:09 | <chatpeople> | haha |
02:09 | <chatpeople> | just want to know |
02:09 | <chatpeople> | will it reboot the server or |
02:11 | <millisa> | they briefly mention it here: https://www.linode.com/docs/uptime/monitoring-and-maintaining-your-server/#configure-shutdown-watchdog |
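(On the reboot question: per that guide, Lassie only powers a Linode back on after it shuts down or crashes unexpectedly; it does not reboot a running server, and it can be switched off per Linode. A hedged sketch, assuming linode-cli exposes the API's watchdog_enabled field as a flag and using a placeholder Linode ID:)

```sh
# Check the flag name with `linode-cli linodes update --help` first; this is an assumption.
linode-cli linodes update 12345 --watchdog_enabled false   # turn Lassie off
linode-cli linodes update 12345 --watchdog_enabled true    # turn it back on
```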
02:12 | <chatpeople> | i have one more question |
02:12 | <chatpeople> | how to recover the server if it crashes |
02:12 | <chatpeople> | basically we are very new into this data center |
02:12 | <chatpeople> | still learning many things |
02:13 | <chatpeople> | particularly, recovering all the previous data, and how much time it would require to shift to a new server with all the previous data
02:13 | <chatpeople> | "particularly, recovering all the previous data, and how much time it would require to shift to a new server with all the previous data"
02:13 | <millisa> | there's the lish console if you need to see what it looks like from the console. there's also a rescue mode you can put the system in. Details here: https://www.linode.com/docs/troubleshooting/rescue-and-rebuild/ |
02:14 | <millisa> | Details on lish are here: https://www.linode.com/docs/platform/manager/using-the-linode-shell-lish/ |
02:15 | <millisa> | consider buying their backup addon - with that you can restore one of the backups into a linode if necessary |
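(Lish is the useful one when the server itself is unreachable, since it attaches to the console from Linode's side. A minimal sketch with a placeholder account username and Linode label; pick the gateway for your datacenter:)

```sh
ssh exampleuser@lish-atlanta.linode.com   # Linode account username, not a user on the server
# at the gateway prompt, type the Linode's label to attach to its console, e.g.:
#   applications-master
```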
02:16 | <chatpeople> | yes taking backup for each server |
02:19 | <chatpeople> | can you tell me |
02:19 | <chatpeople> | how to deal with backup |
02:19 | <chatpeople> | any tutorial |
02:19 | <chatpeople> | i mean when and how backup will help me at what condition |
02:19 | <millisa> | https://www.linode.com/docs/platform/disk-images/linode-backup-service/ |
02:41 | <chatpeople> | one help |
02:54 | -!- | AugustusCaesar24 [~AugustusC@99-190-112-116.lightspeed.irvnca.sbcglobal.net] has joined #linode |
02:54 | -!- | AugustusCaesar24 is "Augustus Caesar" on #linode |
03:26 | -!- | chatpeople [~oftc-webi@119.148.4.18] has quit [Remote host closed the connection] |
03:57 | -!- | AugustusCaesar24 [~AugustusC@99-190-112-116.lightspeed.irvnca.sbcglobal.net] has quit [Quit: Going offline, see ya! (www.adiirc.com)] |
07:05 | -!- | Juma [~amir@77.139.185.199] has joined #linode |
07:05 | -!- | Juma is "Amir Uri" on #linode |
07:22 | -!- | fstd_ [~fstd@xdsl-87-78-190-112.nc.de] has joined #linode |
07:22 | -!- | fstd_ is "fstd" on #oftc #linode #debian #kernelnewbies |
07:30 | -!- | fstd [~fstd@xdsl-87-78-140-227.nc.de] has quit [Ping timeout: 480 seconds] |
07:51 | <linbot> | New news from community: How do I fix system issues after upgrade to Centos 7.7 <https://www.linode.com/community/questions/19262> |
08:03 | -!- | pawan [~oftc-webi@51.235.118.48] has joined #linode |
08:03 | -!- | pawan is "OFTC WebIRC Client" on #linode |
08:04 | <pawan> | Hi! My domain is msh.com.sa. My mail server on another server. I want to route the mails to that server. Could you please help . |
08:06 | <pawan> | ?? |
08:07 | <grawity> | just add a MX record that points to the new server's domain name |
08:07 | <grawity> | oh yeah and update the SPF record (the TXT "v=spf1" one) with the new server's addresses |
08:08 | <pawan> | And what abt A record? |
08:09 | <nate> | Question to fellow PCI-DSS people, if you're doing any sort of card transactions (storing them or not even), user accounts are supposed to have passwords correct and follow the minimum PCI-DSS guidelines right? |
08:09 | <grawity> | the only A/AAAA records that are used for email delivery are those named by MX records |
08:09 | <nate> | Pretty sure that hasn't changed at all with recent PCI-DSS specifications but I haven't really reviewed them in a while |
08:09 | <grawity> | so if you have MX pointing to "mail.msh.com.sa.", then only the mail.msh.com.sa A/AAAA records are used |
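(Putting that together as a zone snippet; the IP below is a placeholder for the other server's address, not the real one:)

```
msh.com.sa.        MX   10  mail.msh.com.sa.
mail.msh.com.sa.   A        203.0.113.10          ; placeholder: the mail server's address
msh.com.sa.        TXT      "v=spf1 mx ~all"      ; allow the MX hosts to send mail
; the A record for msh.com.sa itself stays pointed at the web server;
; only the MX target's A/AAAA records are used for mail delivery.
```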
08:11 | <pawan> | Got it!! Thanks a lot. |
08:13 | -!- | pawan [~oftc-webi@51.235.118.48] has quit [Quit: Page closed] |
08:37 | -!- | Juma [~amir@77.139.185.199] has quit [Ping timeout: 480 seconds] |
08:50 | -!- | Juma [~amir@77.139.185.199] has joined #linode |
08:50 | -!- | Juma is "Amir Uri" on #linode |
08:56 | -!- | eyepulp [~eyepulp@107.152.3.83] has joined #linode |
08:56 | -!- | eyepulp is "eyepulp" on #linode |
09:00 | -!- | Juma [~amir@77.139.185.199] has quit [Ping timeout: 480 seconds] |
09:46 | -!- | w_ [~w@24.133.136.184] has joined #linode |
09:46 | -!- | w_ is "Alper Akyıldız" on #linode |
09:47 | -!- | w_ [~w@24.133.136.184] has left #linode [] |
10:45 | <linbot> | New news from community: Is it appropriate to use Linode object storage for web app back ends? <https://www.linode.com/community/questions/19263> |
10:49 | -!- | DrJ [~asdf@li1303-21.members.linode.com] has joined #linode |
10:49 | -!- | DrJ is "DrJ" on #linode |
10:57 | <linbot> | New news from community: NodeBalancer support for PROXY protocol <https://www.linode.com/community/questions/19264> |
11:45 | -!- | mwp [sid206092@2001:67c:2f08:5::3:250c] has joined #linode |
11:45 | -!- | mwp is "Mike Pastore" on #linode |
11:49 | <mwp> | Hi folks, I have a Linode with two dedicated CPUs, and I'd like to set the default CPU affinity to vcpu #1 so I can manually put some other processes on vcpu #2. Ubuntu Server 18.04 LTS with the current 64-bit Linode kernel. I set CPUAffinity=1 in /etc/systemd/system.conf, executed `systemctl daemon-reload`, and rebooted, but SystemD is still putting some processes on vcpu #2. Any suggestions? Thank you in advance. |
12:22 | <linbot> | New news from community: Server not responding after reboot. <https://www.linode.com/community/questions/19266> || Network Transfer Question? <https://www.linode.com/community/questions/19265> |
12:28 | -!- | shubhamkjha [~shubhamkj@47.9.212.146] has joined #linode |
12:28 | -!- | shubhamkjha is "ShubhamKJha" on #linode |
12:32 | <shubhamkjha> | Hi |
12:32 | <millisa> | Greetings |
12:33 | <shubhamkjha> | I am trying to configure irc for the first time |
12:33 | <shubhamkjha> | :) |
12:33 | <millisa> | It would appear you are having some success |
12:34 | <shubhamkjha> | true |
12:34 | <shubhamkjha> | looks too much geeky though |
12:35 | -!- | shubhamkjha [~shubhamkj@47.9.212.146] has left #linode [] |
12:36 | <millisa> | Too Much Geeky since 1988 |
12:37 | <Peng_> | D: |
12:51 | <@rgerke> | mwp: Sorry it took me a bit to respond, but I do not have an answer for you. I've brought this up to the Support team to see if we can come up with a suggestion for you. |
12:51 | <mwp> | Great! Thanks @rgerke |
12:51 | <mwp> | I might have to set isolcpus on the kernel command line but it doesn't look like that's possible with the Linode kernel |
12:52 | <mwp> | and I'm loath to run and maintain grub and a distro kernel
12:53 | <@rgerke> | mwp: Hang tight for a bit - I'll have something for you as soon as I can! |
12:53 | <mwp> | tks |
12:56 | -!- | Shentino_ [~desktop@96-41-208-125.dhcp.elbg.wa.charter.com] has joined #linode |
12:56 | -!- | Shentino_ is "realname" on #qemu #mm #linode #tux3 |
12:58 | -!- | Shentino [~desktop@96-41-208-125.dhcp.elbg.wa.charter.com] has quit [Ping timeout: 480 seconds] |
13:01 | <dwfreed> | mwp: pretty sure that value is 0-indexed, so you're telling systemd to put all processes on the second core, and it's doing exactly what you asked; look at /proc/<pid>/status for the Cpus_allowed_list value |
13:01 | <mwp> | ooooh |
13:03 | <@jackley> | hmmm I think that might be right? |
13:03 | <@jackley> | http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html |
13:03 | <dwfreed> | jackley: look at the example |
13:03 | <dwfreed> | it gives 0-based processor IDs |
13:04 | <dwfreed> | also references /proc/cpuinfo, which is also 0-based |
13:04 | <@jackley> | ya was just looking at that! |
13:04 | <mwp> | hrrm I think the syscall is zero indexed but the systemd parameter is one indexed |
13:04 | <dwfreed> | but, for fun, I can spin up a systemd service with CPUAffinity set |
13:04 | <dwfreed> | grep Cpus_allowed_list /proc/*/status | sort | uniq -c |
13:04 | <@jackley> | There are various ways of determining the number of CPUs available on the system, including: inspecting the contents of /proc/cpuinfo; using sysconf(3) to obtain the values of the _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN parameters; and inspecting the list of CPU directories under /sys/devices/system/cpu/. |
13:05 | <dwfreed> | after you tell grep not to print the filename... |
13:05 | <dwfreed> | (-h) |
13:05 | <@jackley> | `cat /proc/cpuinfo` lists my first CPU as 0 |
13:07 | <mwp> | it's definitely cpu0 and cpu1 |
13:08 | <dwfreed> | note that some processes may adjust their affinity themselves |
13:08 | <dwfreed> | nginx can do this, for example |
13:08 | <@jackley> | https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html |
13:08 | <@jackley> | Individual services may override the CPU affinity for their processes with the CPUAffinity= setting in unit files, see systemd.exec(5).
13:08 | <@jackley> | ya – what dwfreed said |
13:09 | <mwp> | yup aware of that bit |
13:13 | <dwfreed> | CPUAffinity= is definitely 0-indexed as well |
13:14 | <dwfreed> | just tested it with a simple service |
13:14 | <@jackley> | just noodling around on a dedicated Linode that I set up to test – setting CPUAffinity=0 seems to have done it for me. |
13:14 | <@jackley> | everything is on CPU 0 |
13:14 | <dwfreed> | :) |
13:14 | <@jackley> | !point dwfreed |
13:14 | <linbot> | jackley: Point given to dwfreed. (90) (Biggest fan: mcintosh, total: 17) |
13:15 | <dwfreed> | grep -h Cpus_allowed_list /proc/*/task/*/status | sort | uniq -c |
13:15 | <dwfreed> | ^ to see the count of all the different affinity lists; it is a per-thread property |
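(For anyone landing here later, the working combination from this thread as a sketch; the vCPU number is an example, and CPUAffinity= in system.conf counts from 0:)

```sh
# /etc/systemd/system.conf
#   CPUAffinity=0        <- pin services systemd starts to the first vCPU (vcpu #1)
# system.conf is read at boot, so reboot for the default affinity to apply everywhere,
# then verify the per-thread affinity masks:
grep -h Cpus_allowed_list /proc/*/task/*/status | sort | uniq -c
```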
13:20 | <mwp> | so this is definitely moving me in the right direction |
13:21 | <mwp> | the problem now is that 91 tasks show Cpus_allowed_list 0-1 |
13:21 | <mwp> | these are the processes not started by SystemD, so the CPUAffinity= directive does not apply |
13:21 | <mwp> | and the solution (AFAICS) is to set isolcpus on the command line |
13:22 | <dwfreed> | yes |
13:22 | <dwfreed> | that'll set the default affinity, and then individual processes can override it |
13:22 | <dwfreed> | should take that 91 down quite a bit, at the very least |
13:22 | <mwp> | so is there any way to do that on the linode kernel? or do I have to install grub and a distro kernel? |
13:23 | <dwfreed> | you'd have to use distro kernel |
13:23 | <dwfreed> | (or custom) |
13:23 | <mwp> | gotcha |
13:23 | <@jackley> | tbh maintaining the distro kernel is just as easy as Linode's, once it's set up |
13:23 | <mwp> | okay thanks all... dwfreed jackley rgerke |
13:23 | <mwp> | much appreciated |
13:23 | <@jackley> | (the distro kernel is our default for new images) |
13:23 | <@jackley> | np! |
13:23 | <dwfreed> | you don't need to install grub as the bootloader, but you do need the pieces that generate the /boot/grub/grub.cfg |
13:23 | <mwp> | ahh gotcha |
13:24 | <mwp> | yeah I hear you about the distro kernel, there's just something oddly reassuring about using the linode kernel for everything |
13:24 | <@jackley> | fair enough |
13:24 | <dwfreed> | (you'd only need to install grub as a bootloader if you were using direct disk boot; there's the GRUB "kernel" option, which uses a Linode-provided GRUB to do the bootloader bits and just uses your /boot/grub/grub.cfg for the config) |
13:24 | <@jackley> | (if you needed a guide, here's one https://www.linode.com/docs/platform/how-to-change-your-linodes-kernel/) |
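(A sketch of the isolcpus route dwfreed describes, once the Linode boots a distro kernel or uses the GRUB 2 kernel option; it assumes Ubuntu's grub tooling, an example CPU number, and the serial-console option Linode's guide suggests so Lish keeps working:)

```sh
# /etc/default/grub
#   GRUB_CMDLINE_LINUX="console=ttyS0,19200n8 isolcpus=1"
sudo update-grub     # regenerates /boot/grub/grub.cfg on Debian/Ubuntu
sudo reboot
# after boot, only explicitly pinned tasks (taskset, CPUAffinity=) should land on
# the isolated vCPU; the kernel reports the isolated set here:
cat /sys/devices/system/cpu/isolated
```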
13:25 | <mwp> | also ubuntu hwe is still at 5.0 only |
13:25 | <mwp> | which would be the low maint sol'n |
13:27 | -!- | CodeMouse92 [~JasonMc92@00025241.user.oftc.net] has joined #linode |
13:27 | -!- | CodeMouse92 is "Jason C. McDonald" on #packaging #linode #c++ |
13:30 | <linbot> | New news from community: k8s volume reattachment takes 7 minutes after node disconnect! <https://www.linode.com/community/questions/19267> |
14:04 | <grawity> | dwfreed: actually, are those pieces still needed anymore? IIRC nearly all of them were to do with Lish acting as a serial console, and that's kind of irrelevant when Glish exists |
14:20 | -!- | eyepulp [~eyepulp@107.152.3.83] has quit [Remote host closed the connection] |
14:52 | <linbot> | New news from community: Kubernetes Service type LoadBalancer does not get external IP <https://www.linode.com/community/questions/19268> |
15:15 | -!- | reiindear [~oftc-webi@2604:6000:cc41:4700:7cfe:839b:4b5b:cab5] has joined #linode |
15:15 | -!- | reiindear is "OFTC WebIRC Client" on #linode |
15:17 | <reiindear> | Hi there I'm interested in using Linode and wondering if anyone can tell me how hard/easy it is to upgrade a virtual machine to a higher level once I've set it up. Like can I press a button to move from a $10 a month server to a $20 a month one and keep everything I've already done or will I need to reinstall everything on the new server?
15:18 | <reiindear> | Thank you for any info on this. I'm sure it's written somewhere on the site but I can't seem to find it in the plan information
15:19 | <@jackley> | reiindear: you can press a few buttons and just move to a larger (or smaller) plan -> https://www.linode.com/docs/platform/disk-images/resizing-a-linode/ |
15:20 | <reiindear> | jackley: thank you |
15:20 | <@jackley> | reiindear: you're welcome! |
15:22 | <reiindear> | unrelated but I've found a possible bug on the sales page https://www.linode.com/products/standard-linodes/. Underneath "Solutions to easily deploy and scale " there are a number of tabs " One-Click Apps StackScripts Distributions " and " Storage Networking Tools Services" which are not clickable or at the very least don't change the visible tab when clicked. |
15:23 | <reiindear> | I have tested this in firefox 66.0.3 (64-bit) and chrome 79.0.3945.88. In firefox I enabled all scripts and chrome is a vanilla install
15:23 | <reiindear> | Either way thank you all this is by far one of the best irc channels I've visited in a while |
15:27 | <@jackley> | hmmmm |
15:27 | <@jackley> | reiindear: meaning when you click on "Stackscripts", nothing happens? that's what I'm experiencing (Firefox 70) |
15:27 | <@jackley> | reiindear: and thank you! <3 |
15:29 | <reiindear> | yes that's the behavior I experience as well |
15:30 | <@jackley> | seems like a problem. i'll pass it along, thanks for spotting & reporting that! |
15:30 | <reiindear> | you're welcome |
15:31 | -!- | reiindear [~oftc-webi@2604:6000:cc41:4700:7cfe:839b:4b5b:cab5] has quit [Quit: Page closed] |
16:14 | <linbot> | New news from community: What do I do if I see AVC denial errors? <https://www.linode.com/community/questions/19269> |
16:20 | <linbot> | New news from blog: 2019: A Big Year of Innovation for Linode and a Big Thank You to Our Customers <https://www.linode.com/2019/12/30/2019-a-year-in-review/> |
16:23 | * | Peng_ tempted to make many bad terms of service jokes |
16:24 | <Peng_> | https://www.juniper.net/us/en/company/case-studies-customer-success/linode/ <- Interesting link in the middle of that post. |
16:24 | <Peng_> | Wait, wasn't Linode a Cisco shop? |
16:24 | <virtual> | I thought so too? |
16:25 | <virtual> | but my knowledge is at least 4 years old. much can happen in that time. |
16:27 | <Peng_> | "Starting today, we’ve reduced our transfer overage price to $0.01." \o/ |
16:27 | <virtual> | ooh, what was it? |
16:28 | <virtual> | oh, 0.02. 50%! :) |
16:30 | <FluffyFoxeh> | don't stretch yourselves too thin, now :) |
16:34 | <virtual> | this is a big discount, for some people, I am sure. |
16:34 | <virtual> | I have never gone over my transfer allotment, but I only do personal site stuff on my Linode. |
16:34 | <FluffyFoxeh> | I was more talking about the new data centres planned |
16:35 | <Peng_> | I almost went over once :D |
16:35 | <virtual> | ahhh, yeah. But they gotta compete against others who are already there. Also, if they can get ahead of the curve, even betterer. |
16:35 | <virtual> | I am glad they're in SYD, but now I have to find budget for another Linode. Budget is super duper tight at the moment, which is frustrating. :P |
16:35 | <linbot> | New news from community: How can I download a directory with many files from my Linode? <https://www.linode.com/community/questions/19270> |
16:37 | <FluffyFoxeh> | just get more money |
16:37 | <virtual> | FluffyFoxeh: working on it ;) |
16:38 | -!- | |GIG [~MYOB@158.115.253.31] has quit [Remote host closed the connection] |
16:38 | -!- | |GIG [~MYOB@158.115.253.31] has joined #linode |
16:38 | -!- | |GIG is "J" on #moocows #linode |
16:48 | <DrJ> | Not that I use excess transfer (Linode is gracious to give me way too much allowance)... but nice to see the lowered price |
16:48 | <DrJ> | you have got to be kidding me though |
16:48 | <DrJ> | "Instantaneous DNS Manager updates" |
16:49 | <DrJ> | I literally just finished moving all my DNS to cloudflare for this exact reason |
16:49 | <DrJ> | by literally I mean last week |
16:50 | <DrJ> | I've begged for "instantaneous" dns updates in here for years... finally bit the bullet and moved... and not a week later linode announces they are going to roll it out
16:51 | <DrJ> | *facepalm* |
16:51 | <Peng_> | Well, they might be aiming to roll it out in December 2020. :P |
16:51 | <Peng_> | and unless they turn off caching, it's still gonna be a *little* slower than Cloudflare. |
16:58 | <millisa> | mumbai, sydney, and toronto all get pretty patch type badges for their datacenters. I demand similar graphics for other datacenters so I can put it on my linode merit badge vest |
16:58 | <millisa> | based on the name . . .they are stickers? https://www.linode.com/wp-content/uploads/2019/12/stickers-collection-e1576718069299-1536x495.png |
16:59 | <Peng_> | You can but stickers for Atlanta from https://www.redbubble.com/shop/dumpster+fire+stickers |
16:59 | <Peng_> | buy* |
16:59 | <virtual> | LOL |
16:59 | <millisa> | !point Peng |
16:59 | <linbot> | millisa: Point given to peng. (46) (Biggest fan: millisa, total: 11) |
16:59 | <virtual> | I think I need to buy some of those and hand them out in the office ;) |
16:59 | <millisa> | (don't see the stickers on the real swag shop) |
17:02 | <millisa> | "Per customer network VLAN" that one is the one I'm most excited about |
17:05 | <grawity> | "that inter-linode networking stuff sounds awesome", I say to myself, while having only one linode total and already planning on getting rid of it as well |
17:06 | <millisa> | Kinda hoping it allows multiple vlans per account so I dont have to create more accounts to keep customers of mine isolated from each other. |
17:07 | <Peng_> | And multiple accounts per VLAN? :D |
17:08 | <millisa> | not sure I'd use that . .but I'm sure there's someone that would |
17:40 | <nbm> | i don't remember who sent me a link explaining how to install void linux with linode |
17:40 | <nbm> | i lost the url |
17:41 | -!- | koenig [~koenig@00029c1b.user.oftc.net] has quit [Quit: WeeChat 2.5] |
17:42 | <millisa> | how long ago? |
17:42 | <virtual> | whoah, this might be old news to some of you - but I just saw a bonus credit in my account! |
17:42 | <virtual> | thank you, Linode! |
17:44 | <nbm> | millisa: not so long ago |
17:45 | <nbm> | maybe 4 months ago? |
17:45 | <millisa> | if it was back in october, i vaguely remember doing one |
17:45 | <nbm> | it was probably yours then |
17:45 | <millisa> | it was mostly based on this https://wiki.voidlinux.org/Install_on_a_VPS and the linode custom distribution guide |
17:47 | <millisa> | these were my notes from then: https://vomitb.in/htbxMQyJHT |
17:47 | <nbm> | yes i think it was yours |
17:48 | <nbm> | because of the custom grub lines :) |
17:48 | <millisa> | those were mostly to get lish working |
17:49 | <nbm> | thank you millisa |
19:06 | -!- | Dataforce` [~dataforce@dataforce.org.uk] has joined #linode |
19:06 | -!- | Dataforce` is "Shane "Dataforce" Mc Cormack" on #linode #bitlbee #oftc #DMDirc |
19:06 | -!- | Dataforce is now known as Guest12744 |
19:06 | -!- | Dataforce` is now known as dataforce |
19:06 | -!- | Cajs [Cajs@185.198.189.47] has quit [Read error: Connection reset by peer] |
19:07 | -!- | Guest12744 [~dataforce@dataforce.org.uk] has quit [Remote host closed the connection] |
19:09 | -!- | phyber_ [phyber@2a02:8011:25:c0de::c0:ffee] has joined #linode |
19:09 | -!- | phyber_ is "People said I was dumb, but I proved them!" on #linode |
19:09 | -!- | Cajs [Cajs@185.198.189.47] has joined #linode |
19:09 | -!- | Cajs is "Cajs" on #linode |
19:09 | -!- | |avril [wha@cute.b.oy.hu] has joined #linode |
19:09 | -!- | |avril is "..." on #tor-bots #moocows #linode |
19:11 | -!- | avril [wha@00028d51.user.oftc.net] has quit [Ping timeout: 480 seconds] |
19:11 | -!- | SpydarOO7 [spydar007@spydar007.user.oftc.net] has quit [Read error: Connection reset by peer] |
19:13 | -!- | rweir- [foobar@quandry.ertius.org] has joined #linode |
19:13 | -!- | rweir- is "no" on #linode |
19:13 | -!- | Kuukunen- [~qq@kuukunen.net] has joined #linode |
19:13 | -!- | Kuukunen- is "qq" on #linode |
19:13 | -!- | Hazelesque_ [~hazel@phobos.hazelesque.uk] has joined #linode |
19:13 | -!- | Hazelesque_ is "Hazel" on #linode #ceph #nlug #bongo |
19:15 | -!- | Spydar007 [spydar007@spydar007.com] has joined #linode |
19:15 | -!- | Spydar007 is "Spydar007" on @#Mikaela @#kovri #virt #supybot #spi #redditprivacy #perl #oftc #moocows #mastodon-administration #linux #linode @#kovri-dev #friendica #debian |
19:15 | -!- | tomaw_ [tom@tomaw.chair.oftc.net] has joined #linode |
19:15 | -!- | tomaw_ is "Tom Wesley <tom@tomaw.net>" on #debian-es #debian-django #debian #ceph #oftc #moocows #linux #virt #freenode #debian-ipv6 #bitlbee #linode #irssi #help |
19:15 | -!- | Roedy [Roedy@0002706e.user.oftc.net] has joined #linode |
19:15 | -!- | Roedy is "Roedy" on #openvas #linode #debian #OpenBSD |
19:17 | -!- | jhq [~jasper@shell.jhq.io] has joined #linode |
19:17 | -!- | jhq is "Jasper Backer (jhq)" on #virt #linode #debian #ovirt #debian-next |
19:17 | -!- | fifr_ [~fifr@math-pc1.mathematik.uni-kassel.de] has joined #linode |
19:17 | -!- | fifr_ is "Frank Fischer" on #linode |
19:17 | -!- | Netsplit charon.oftc.net <-> liquid.oftc.net quits: phyber, tomaw, Hazelesque, tomchen[m], Turl, lpalgarvio[m], frailty[m], fifr, DennyFuchs[m], Roedy-, (+13 more, use /NETSPLIT to show all of them) |
19:17 | -!- | phyber_ is now known as phyber |
19:18 | -!- | TonyL [~Tony@zeus.dedi.cakeforce.uk] has joined #linode |
19:18 | -!- | TonyL is "Tony L" on #oftc #linode #debian |
19:18 | -!- | Netsplit over, joins: dueyfinster |
19:19 | -!- | Netsplit over, joins: hawk |
19:20 | -!- | Tol1 [tol1@nokkala.info] has joined #linode |
19:20 | -!- | Tol1 is "Tomi Nokkala" on #linode |
19:26 | -!- | Netsplit over, joins: grawity |
19:38 | <dwfreed> | grawity: I mean, you need a grub.cfg, unless you like telling grub how to boot your Linode manually |
19:41 | -!- | Turl [~Turl@yotta.elopez.com.ar] has joined #linode |
19:41 | -!- | Turl is "Turl" on #ros #opensde #linode #kernelnewbies |
19:44 | <dwfreed> | Peng_: virtual: yes, Linode mentioned in the NextGen Network blog post that the entirety was built on Cisco Nexus hardware |
19:46 | <Peng_> | Juniper's case study kind of implies that Linode threw all their Cisco gear in the dumpster and went Juniper/Corero the moment after the Christmas DDoS. |
19:47 | <dwfreed> | https://twitter.com/linode/status/581100233058312192 |
19:50 | <dwfreed> | Peng_: ^ considering the date on that tweet is 9 months before, that'd be a very expensive dumpster... |
19:51 | <dwfreed> | an ASR 9006 chassis *with nothing in it* is $7,000 |
19:51 | <dwfreed> | and that's on sale! |
19:52 | <dwfreed> | the devices pictured are probably in the neighborhood of 50k a piece |
19:53 | <dwfreed> | now multiply that times 2, per datacenter |
19:53 | <dwfreed> | No, I doubt they got rid of all their cisco equipment, though those ASRs may be the only thing they kept |
19:54 | <virtual> | I thought it might be the other way around, they keep the nexuses, but not the ASRs, considering it's about DDoS, their juniper thing? |
19:55 | <virtual> | I didn't look at the PDF, so...
19:56 | <dwfreed> | the ASRs can handle multiple Tbps, far more capacity than Linode will have in the near future |
19:56 | <virtual> | sure, I was thinking about the ddos biz. never heard of corewhatever. |
19:56 | <dwfreed> | the issue is not the border, but the other edge |
19:56 | <virtual> | not sure if they only work with juniper routers, or something. unlikely, I'm sure. |
19:57 | <virtual> | oh? the customers...? |
19:57 | <dwfreed> | yes |
19:57 | <dwfreed> | and the hosts |
19:57 | <virtual> | never really considered that - I thought all the DDoS were inbound to Linode. |
19:57 | <dwfreed> | oh, they are |
19:57 | <virtual> | were they outbound from compromised hosts or something too? |
19:57 | <dwfreed> | some are, but those aren't nearly as much of an issue |
19:58 | <dwfreed> | the pipe coming into the datacenter is 200 Gbps+ |
19:58 | <Peng_> | virtual: Looking at the PDF doesn't help much, since it's green and grey text on a white background. Or white text on a green background. |
19:58 | <virtual> | nice, made for readability then |
19:58 | <dwfreed> | but the pipe to an individual host is probably 40 Gbps |
19:59 | <Peng_> | Actually, wait, the PDF gives a date |
19:59 | <virtual> | surely you still want to block it at the border - as close to your source as possible (and then get your provider to block it too). |
19:59 | <dwfreed> | when you're getting an 80 Gbps DDoS, the border doesn't notice, but the host does |
19:59 | <virtual> | provider(s) |
19:59 | <dwfreed> | sure, which is why you just BGP the target IP on the ASR to the DoS mitigation box |
19:59 | <Peng_> | "Spataro discovered a solution at Juniper’s 2018 NXTWORK conference. The addition of Corero SmartWall Threat Defense Director, in combination with the MX960 routing platforms that Linode already used, would provide real-time DDoS mitigation." |
20:00 | <virtual> | wonder why juniper then, and not a cisco ddos device (I'm assuming one exists) |
20:00 | <virtual> | oh, they already use mx960s? wtf are the ASRs for?? |
20:00 | <Peng_> | virtual: Three and a half years between the ASRs and the MX960s |
20:01 | <dwfreed> | so they may have switched to MX960s after a few years |
20:01 | <dwfreed> | I'm just on the outside these days, no idea |
20:01 | <virtual> | aren't they approximately similar devices? |
20:01 | <dwfreed> | yes |
20:02 | <dwfreed> | NXTWORK 2018 was early October 2018 |
20:05 | <virtual> | I haven't had to buy any new gear since 2014. or was it 2013. now I'm just a user. :P |
20:05 | -!- | nate [NBishop@00013625.user.oftc.net] has quit [Quit: (?°?°)?? ???] |
20:11 | -!- | nate [NBishop@d-207-255-41-254.paw.cpe.atlanticbb.net] has joined #linode |
20:11 | -!- | nate is "Nathan" on #linode #php |
20:12 | <millisa> | i bought new gear today... hadnt updated my primary workstation in 4 years. this was going to be my second gen wall mount/art display type system. |
20:13 | <millisa> | had just finished mounting the motherboard and gigantic heatsink and was going to finish tightening the recessed screws. turned my back and heard a terrible crash. |
20:13 | <virtual> | no! |
20:13 | <millisa> | i guess i had just barely balanced it on the desk and the whole thing keeled over, landing on the heatsink.
20:13 | <virtual> | :( |
20:14 | <millisa> | it's now got a nice 15 degree tilt to it. still boots. so i've got that going for me |
20:14 | <virtual> | hmm |
20:14 | <virtual> | I guess tilted is still art. |
21:04 | -!- | Parinioa [~oftc-webi@2606-a000-1319-40d3-60bc-74ca-1a54-ef27.inf6.spectrum.com] has joined #linode |
21:04 | -!- | Parinioa is "OFTC WebIRC Client" on #linode |
21:09 | -!- | |GIG-1 [~MYOB@158.115.253.31] has joined #linode |
21:09 | -!- | |GIG-1 is "J" on #linode #moocows |
21:09 | -!- | |GIG-1 [~MYOB@158.115.253.31] has quit [Remote host closed the connection] |
21:16 | <DrJ> | ty Linode for a great decade! Been a customer since 2010... ready for another full decade :) |
21:20 | <@jyoo> | Happy to have you! :) |
21:22 | <Parinioa> | It seems there was an issue in Atlanta about an hour ago, resulting in a system reboot. Where can we get more info on this than the generic support ticket gave? |
21:23 | <millisa> | if it impacts multiple people, they usually put something up on the status page. |
21:23 | <@jyoo> | I'm taking a look at Atlanta now, please hold |
21:23 | <millisa> | you can reply to the ticket they generated and ask for more details. or give 'em a call |
21:24 | <@jackley> | if it resulted in a reboot, it sounds like a hardware issue |
21:24 | <@jackley> | which isn't something we'd normally put on the status page |
21:25 | <@jackley> | Parinioa: is your host h689-atl1? |
21:25 | <@jyoo> | Parinioa: A host in Atlanta was down for emergency maintenance about an hour ago due to a software issue. The host stopped responding and required a reboot to bring it back up. |
21:25 | <Parinioa> | The ticket just referred to "an issue affecting the physical host" |
21:28 | <millisa> | (where does the new manager show the host system's name?) |
21:29 | <Parinioa> | I was about to ask the same question. I can't seem to find it |
21:29 | <millisa> | in the old manager its on the linode details page |
21:30 | <millisa> | in that right hand sidebar, down near the bottom |
21:30 | <@jackley> | it's only in the classic manager, it doesn't exist in Cloud (I asked that b/c I thought you might be using classic) |
21:31 | <Parinioa> | I'm happy to use classic, but that option doesn't seem to be on the login screen anymore |
21:31 | <millisa> | manager.linode.com |
21:31 | <@jackley> | ya – but we only had an issue with a single host in ATL in the last hour |
21:32 | <@jackley> | so I guess there's no need to confirm |
21:32 | <Parinioa> | It is currently on h689-atl1 (KVM) |
21:32 | <@jyoo> | Yep, that one had an RCU stall that caused it to go down briefly |
21:33 | <Parinioa> | That helps, thanks. |
21:35 | <Parinioa> | Though it makes me worry that losing the classic manager also means we're losing some functionality. Are there plans to implement all the info from the classic manager in the cloud manager eventually?
21:37 | <@jackley> | ya we're aiming to have parity for most things in the cloud manager (we just added longview, which was one of the last big features we needed) |
21:39 | <@jackley> | I don't believe we're planning on adding host info to the API, though, so that will likely not be there by the time the classic manager is EOL'd |
21:39 | <Parinioa> | That will make answering your earlier question quite a bit harder. |
21:41 | <@jackley> | true! but I also could have provided an answer without having to ask that |
21:42 | <Parinioa> | No doubt. I'm just a grumpy old man that doesn't like to see things change when they were working for me as is.
21:42 | <@jackley> | heh I can empathize with that |
21:45 | <LouWestin> | I was just reading Linode’s blog recap for the year. I didn’t know Linode had added DDOS protection. Least not general blanket protection |
21:47 | <virtual> | I think it's probably in Linode's best interests to do this too - those DDoSes impacted more than just the targeted customers of Linode - if they were targeting those and not Linode infrastructure.
21:47 | <virtual> | Depressing that it's even a thing that has to exist though. |
21:48 | <LouWestin> | I believe it was about 2 years ago when Linode got hit really hard for two weeks. Atlanta was probably hit the worst
21:49 | <millisa> | 4 years ago |
21:49 | <millisa> | xmas of 2015 |
21:49 | <millisa> | https://www.linode.com/2016/01/29/christmas-ddos-retrospective/ |
21:49 | <Peng_> | Time flies when you're not down :D |
21:50 | <LouWestin> | Essh, indeed! |
21:51 | <LouWestin> | That means I’ve been a linode customer for...5 years at least |
21:52 | <LouWestin> | Probably 6. Too lazy to login to my account right now |
21:53 | <@jackley> | feels like it was so long ago |
21:55 | <LouWestin> | I’m a bad judge of age. lol hence being off by two years. |
22:24 | -!- | koenig [~koenig@00029c1b.user.oftc.net] has joined #linode |
22:24 | -!- | koenig is "koenig" on #linode |
23:02 | <Nightmare> | I remember that blog post after Linode HQ got swatted, lol |
--- | Log | closed Tue Dec 31 00:00:48 2019 |