Figured I would post this here too. Not a bad read, and if true, pretty darn impressive network for Project L to be running on.
Question 1: There was a time when the MPLS core used to be constructed with point-to-point SDH links. The P routers were connected with STM-16/32 links, and we built MPLS over those links.
Fast forward to 2021, when most of the STM/DWDM equipment is being phased out: what will the underlying Layer 1 technology of the future be? How are SPs upgrading their underlay in 2021?
Question 2: If I wanted to go out and procure a point-to-point Ethernet link, what are the options? I don't want a Layer 2 VPN circuit mimicking an Ethernet pipe.
Any info would be of great help. Thanks.
Nokia selected by Capital Online to upgrade its IP backbone network
- With its leading high-density, high-capacity Service Router, Nokia will help Capital Online build a cost-effective converged backbone network and realize its Network 2.0 plan
- The upgrade will enable Capital Online to provide reliable cloud services with a better customer experience
- End-to-end support for advanced IP protocols and QoS will allow Capital Online to meet varied customer needs with efficient service delivery and assurance

10 January 2022
Espoo, Finland – Nokia today announced it has been chosen by Capital Online, a cloud computing service provider headquartered in Beijing, to upgrade its IP backbone network to complete the company’s Network 2.0 plan. This will enable Capital Online to provide reliable cloud services while improving the customer experience.
Under the agreement, Capital Online will deploy the Nokia 7750 Service Router (SR) and 7250 IXR interconnect router platforms to support its ambitious Network 2.0 plan. This entails a converged backbone that provides performance certainty for all traffic flows under all network conditions, the versatility to converge edge and core routing functions onto a common platform, smart traffic engineering with segment routing over MPLS (SR-MPLS), and granular QoS to address different traffic demands for reliable service delivery.
The Nokia 7750 SR and 7250 IXR platforms offer unique advantages for building Capital Online’s backbone network. Based on the industry-leading Service Router Operating System (SR OS), Nokia’s portfolio provides end-to-end advanced IP routing protocols, including Segment Routing, to achieve fast and efficient service delivery along with end-to-end service assurance for all traffic flows. By building a cost-effective converged backbone network based on high-density routers having 100GE/400GE interfaces, Nokia’s platforms are geared to help Capital Online succeed in its cloud business.
Xu Xiaohu, Chief Architect of Capital Online, said: “As a trusted partner of critical networks, Nokia has abundant experience helping its global customers build high-capacity, high-quality IP networks. We’re looking forward to collaborating with Nokia to accelerate the transformation of our network to provide faster, more reliable network services for our global customers throughout the U.S., Europe and Asia.”
Markus Borchert, CEO of Nokia Shanghai Bell, said: “Capital Online is known for being spe...
It's for a small but growing firm. They don't have a ton of cash, and I don't have an estimated budget. I thought either a single expensive Dell or HPE box with expensive drives and RAM and a pricey rapid-response warranty to mitigate the single point of failure.
The other option was clustering with a Proxmox HA cluster (I've only run single-node Proxmox, but I have run vSphere clusters). I thought maybe three ASRock Ryzen boards with ECC RAM and dual 10GbE NICs and a good switch, with either CephFS, or ZFS and Gluster, which I don't know and am not comfortable with. I don't know CephFS either, but I've heard it's well documented.
What is a good, reliable 10-40GbE switch? I know new NAS drives are a must, and it's hard to go up in size later.
The other option is a bunch of less expensive Xeon v3 Supermicro boards on GbE, but I don't know if that's too slow for CephFS.
I haven't used FreeNAS in any kind of redundant setup, but I'd imagine if I had an instance on each machine with HBA and disk passthrough I could cluster that. I'm also going to mess around with pfSense clustering.
I know this is all over the place, because I go from one HPE box to a cluster of ASRock Ryzens, so I appreciate the feedback.
Is pfSense what I want? I like Proxmox and FreeNAS. I'm used to scale from large companies, but this is only a few employees, and I'm just not sure I feel comfortable with a single point of failure.
Any shout-outs for good, solid 10-40GbE switches? (My understanding is two ports per machine, for network and storage, so maybe a 10-16 port switch if the whole office is 10GbE.) The prices for switches are all over the place. I wish this could be redundant too; I'm so used to scale.
I mean, the smallest HPE would probably do it. It's just that they work with video sometimes.
(1/11) The limitations of $ETH in its original form have always been evident. It can't scale, so when volume (i.e. traffic) substantially increases it becomes unusable for small users due to high fees, promoting inequitable opportunity, which is against the #ETHOS of #DeFi & #BlockChain and limits inclusiveness. $ETH has also screeched to a halt due to technological limitations, making it unusable at times (i.e. Crypto Kitties). Both of these center on scalability. Enter the commit chain, a term coined by @finematics for $MATIC, which allows for scalability; it introduces network security complications but is superior to a traditional side chain. $MATIC leveraged the large network (users & liquidity) of $ETH to implement its technology, allowing for exponential growth, which is why I purchased at $0.07. $MATIC won over most projects in the space due to its superior customer service & onboarding of projects onto the network. I have been using $AAVE on the $MATIC network for months now. The fees are amazing, but when traffic...
Hello, If there any devs for LBRY on here can you comment on the viability of using Avado (a group of specialized hardware for node running) to help supplement the LBRY network?
Easiest for any project is to build the package themselves using the AVADO SDK: https://github.com/AvadoDServer/AVADOSDK
Once they have the IPFS hash, we will take a look and add it to the dappstore. AVADO and its resources should not hold back any project from being added.
#Happy Wednesday, Barkada --
#The PSE lost 44 points to 6992 ▼0.6%
Thanks to elginshire and Robin for the meme love, to /u/renrengabas for the congrats, to /u/PHValueInvestor for the great back-and-forth on company disclosures and suspensions, and to Jing for pointing out the great performance of AyalaLand Logistics [ALLHC] since I began ranting and raving about the logistics sector in the Philippines. To be fair, though, I've been pretty hard on ALLHC for being still too much of a generalist (electricity sales, industrial lot sales?) and not a pure-play on warehousing, shipping, and the value-added services that come with that ecosystem. I'm all about the pure-plays!
Shout-outs to elginshire, @shutangnajuice, Stephen Chiong, Corgi Buttowski, ★ PEDRO the 3 ★, @JaviPottamus, Chip Sillesa, Lance Nazal, @PH7641, and Jing for the retweets, and to Mike Ting, Rumaras Musen, and Froilan Ramos for the FB shares.
Read below for more detail and analysis!
#▌Top 3 MB indices: Connectivity ▲1.13% Cement ▲0.82% Power Gen. ▲0.46%
#▌Bottom 3 MB indices: #COVID-19 ▼1.21% POGO Prop. ▼1.07% Fast Food ▼1.02%
#▌Main stories covered:
>- [NEWS] Converge [CNVRG 22.75 ▲0.89%] reveals plans to double transmission capacity of “metro backbone”... the country’s fastest growing broadband provider revealed its plans to double the transmission capacity of its metro backbone line, from 400 Gigabits per second (Gbps) to 800 Gbps. CNVRG said that it was doing this in anticipation of the future demand caused by “hyperscale capacity applications” (smile and nod), the “Internet of Things” (smart appliances, etc), and “smart cities” (cool, cool). The...
Why YSK: We all know repairs cost a ton of money annually for simple things like changing Wi-Fi passwords, cut lines, and so on, but those are about the most common repairs for these technicians. They're trained to work on the construction aspect of these jobs, such as running lines, terminating connections, and doing basic troubleshooting on the hardware provided by the company.
They are not trained to troubleshoot your home server rack, or to figure out why at 3am your upload speeds decide to die randomly. Most of them have no idea what kind of evil bandwidth limiting thing the ISP is doing and they won't have an answer to why X site is always slower than Y site. They are only trained to install and repair the services by the company. Expecting anything beyond this will cost you hundreds in continually calling in technicians who haven't been trained on these issues.
You've probably been in situations where you've called a cable company three or four times to solve an issue, to only get half fixes until some really seasoned Technician shows up and fixes it near instantly, or finds an issue outside your home that has nothing to do with your services. This is a limitation of the job title and the training most people who install cable receive.
For fiber customers the very most a technician will receive is a device that has a microscope to see how dirty fiber connections are, and a meter that shows how much "power" your signal has. They don't have any other tools like a TDR (which can diagnose things beyond signal loss) or a working knowledge of where faults may occur within the network.
Moreover, the ways certain hardware and software manage bandwidth, the placement of the antenna in the wireless device you're using, or even the type and length of the cables running in your home can affect that little number on speedtest.net. Most techs aren't trained to know when and where these issues arise, and that's not a failure of the technician but rather a limitation of their job title and the training that comes along with it.
There's also a high turnover rate for cable installers, specifically fiber install and repair techs. Their skills are in high demand, and the work is often unforgiving and brutal. Few stick around when higher pay is available elsewhere, so a lot of the folks who do this work have only done it for a year or so. They only know what they're trained to know, and that's the installation of these specific services and basic r...
I am not an expert on image segmentation, so I need some advice.
Currently, we have a compact network for segmentation whose encoder is based on EfficientNet-b0. But my boss asked me to scale it up to enlarge the capacity of the network and "achieve the best mIoU and accuracies possible" (the motivation being that the larger the network capacity, the better the potential performance).
What typical networks (backbones) are used for successful segmentation tasks these days? I am not sure where I should start. Maybe try another version of EfficientNet, like b5? Or some other mainstream architecture? Or increase the input resolution while keeping the output resolution unchanged?
☑ RELATED POSTS:
➔ NOTICE: I waited for Satisfactory Game Update 4 for Early Release to stabilize, and for "key Game Mods" to be updated before continuing my Satisfactory Gaming "Adventure". Since I will soon once again continue my work on the Planet called MASSAGE-2(A-B)b in the binary star system of Akycha, I decided to post my experience in Update 3 in the hopes that my previous work might inspire a Pioneer working on their own "Adventure".
And so... let's begin...
A. Brief History:
As I stated in Part 1 - Brief History, I had continued to expand Alpha Base, working my way up through the completion of Tier 7 (Tier 8 was not available at that time). Upon reaching Tier 7, I initially was going to start building a "Mega Factory" in the Islands Biome (AKA "The Gold Coast"), and began work on my "Beta Base". This led to my construction of my Geothermal Power Network (GPN) that eventually connected all 18 Geothermal Nodes. Shortly after partial completion of the GPN (Sites #2 thru #10) came the announcement of the release date for Update 4 coming to the Satisfactory Experimental Version (with Update 4 for Early Release expected not far after that), which resulted in my putting my "Mega Factory" ideas on hold.
AND SO... I switched tactics and began working on "Update 4 prep work". Due to the fact that initially all Game Mods would be lost for a while, this led to the eventual construction of a Planetary Hyper Tube Network following the former GPN, which I now began calling the Planetary Power Backbone (PPB).
B. Key Mods I Used:
Hi everyone, I trained a classification model using the timm library and EfficientNetV2-S. Is it possible to use that as a backbone for Faster R-CNN using PyTorch, and is there any library that can help me with that?
I have tried using EfficientDet by Ross as well, but I am looking at the Faster R-CNN approach for comparison purposes.
Finished setting up the new network back-end, thought I'd share with the community!
Protectli Vault 4 Port running pfSense 2.4 (Amazon link) - Router/firewall
TP-Link Omada EAP245 WAP (Amazon link) - AP (obviously)
TP-Link PoE Injector - Comes with the EAP245
TP-Link 8 Port Gigabit Switch (Amazon link) - Was going to go with a 48 port Cisco, but it was too big for the panel :)
Looking forward to the comments/critiques. Thanks!
I don't see too many large(ish) Ubiquiti installs. We love UniFi, and I wanted to share what we did this weekend.
We used 16XGs to connect 15+ UniFi 48-port 750W switches. We are aggregating all uplinks and downlinks. All the XGs connect to some Dell S4128F-ON switches with distributed port groups over a 200Gb link. I wish UniFi gear could do that.
We used 16XGs as our top-of-rack switches, with the Ubiquiti 10Gb SFPs in our Dell switches along with Cat 6A, creating 4-port aggregation groups on the 16XG Base-T ports. Then we did 2-port aggregation to our 48-port switches.
We also have ATS PDUs to allow UPS failover for single-power-supply switches.
You may notice there are some Ubiquiti 54V power supplies that connect to fiber PoE devices 1,000+ feet away to power and network outdoor cameras and APs.
From the FCC technical attachment in SpaceX's proposal: "The system will also employ optical inter-satellite links for seamless network management and continuity of service..."
It is mentioned in an article here that Equinix plans to use 200Gbps optical crosslinks between satellites, using tech developed by Laser Light Communications.
Current undersea cables range in bandwidth from 320Gbps-40Tbps. Assuming for the moment that SpaceX would be using something at least as capable as the solution from Laser Light Communications, 200 satellites aggregate bandwidth (say 200 over the US -> 200 over Europe) could approach 4Tbps in bandwidth - a competitive level.
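One way to arrive at the ~4 Tbps figure is to assume that only a fraction of those satellites carry a transatlantic crosslink at any given moment. The "20 concurrent links" count below is my assumption for illustration, not a number from the FCC filing:

```python
# Back-of-envelope for the ~4 Tbps aggregate figure above.
link_gbps = 200          # per optical crosslink (Laser Light Communications-class)
concurrent_links = 20    # hypothetical simultaneous US -> Europe crosslinks
aggregate_tbps = link_gbps * concurrent_links / 1000
print(aggregate_tbps)    # 4.0
```

At the bottom of the undersea-cable range (320 Gbps), even a handful of such crosslinks is competitive; against a 40 Tbps cable, the constellation would need a couple hundred concurrent links.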
I haven't seen this anywhere else, but SpaceX could make money from transit agreements shuttling this traffic over its unused bandwidth. Unfortunately, I have no insight into how lucrative these transit agreements are.
Beyond that, there is another thought I had about this network that has far-reaching political implications. It could conceivably bypass many nations' internet firewalls. For example, a remote user in China with a transceiver on their roof could connect directly to the satellite network and have their traffic shuttled between satellites in space (outside the control of the national government) to the terrestrial transceiver nearest the requested site. If the nearest terrestrial site to the requested server is outside of the end user's home nation, the home nation's firewall has been bypassed.
Although I am a supporter of a free, uncensored internet, and I would like for this to come to pass, I anticipate that this might become a major hurdle for this network to overcome.
Edit: 2016-11-19 14:52 EST - Thanks to /u/TootZoot for this video of Elon at the SpaceX Seattle campus launch. There are quite a few design details from Elon that shed some light on this endeavor:
"rebuilding the internet, in space"
"majority of long distance internet traffic over this network" - So, Elon has already thought of this. I swear, this man is always two steps ahead.
"about 10% of local consumer and business traffic, so that's, still 90% of local access will still come from fiber"
"The speed of light in a vacuum is 40-50 percent faster than in fiber, so y...
Training Problems for an RPN
I am trying to train a network for *region proposals*, as in the anchor-box concept from *Faster R-CNN*.
I am using a pretrained *ResNet-101* backbone with three layers popped off: the `conv5_x` layer, the average pooling layer, and the softmax layer.
As a result, the convolutional feature map fed to the RPN heads for images of size 600×600 has a spatial resolution of 37×37 with 1024 channels.
I have set the gradients of only block `conv4_x` to be trainable.
From there I am using the torchvision.models.detection RPN code: the rpn.AnchorGenerator, rpn.RPNHead, and ultimately rpn.RegionProposalNetwork classes.
The call to forward returns two losses: the objectness loss and the regression loss.
> The issue I am having is that my model is training very, very slowly. In Girshick's original paper he trains over 80K mini-batches (roughly 8 epochs, since the Pascal VOC 2012 dataset has about 11,000 images), where each mini-batch is a single image with 256 anchor boxes, but my network improves its loss VERY SLOWLY from epoch to epoch, and I am training for 30+ epochs.
My code can be viewed at this pytorch discussion link here:
I am considering trying the following ideas to fix the network training very slowly:
- trying various learning rates (although I have already tried 0.01, 0.001, and 0.003 with similar results)
- various batch sizes (so far the best results have been with batches of 4, i.e. 4 images × 256 anchors per image)
- freezing more/less layers of the Resnet-101 backbone
- using a different optimizer altogether
- different weightings of the loss function
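On the last bullet, a toy illustration of what re-weighting the two RPN losses looks like before backprop. The 1.0/2.0 weights and the loss values are made up for the example; stock Faster R-CNN simply sums the two losses unweighted:

```python
import torch

# Stand-in values for the two losses returned by the RPN forward pass.
loss_objectness = torch.tensor(0.6)
loss_rpn_box_reg = torch.tensor(0.2)

# Hypothetical weights: emphasize box regression twice as much.
w_obj, w_reg = 1.0, 2.0
total_loss = w_obj * loss_objectness + w_reg * loss_rpn_box_reg
print(round(float(total_loss), 4))  # 1.0
```

If the objectness loss plateaus while the regression loss keeps falling (or vice versa), a weighting like this is a cheap experiment before reaching for a different optimizer.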
Any hints or things obviously wrong with my approach are MUCH APPRECIATED. I would be happy to give more information to anyone who can help.