Nano node V21.3 has been deployed to the nano network and is now available.
For details on the update, check the announcement below.
Above a certain number of nodes, the network probably can't keep them all synced, correct?
What is that number?
Update: there have been a few good answers, but many more hostile reactions. Yes, I'm already aware it's decentralized; that does bring network communication issues at high enough scale.
This is a thought experiment about how large the number of nodes could get before it becomes a problem, so please stop insisting that the question is irrelevant.
Can this be done on a typical PC? Does it require being live 24/7? Are there steps available on how to set it up? Any info is appreciated.
I'm curious whether I can increase the number of peers I connect to, or become a better node by opening certain ports or something like that. Maybe it's also possible to seed it via Tor and the clearnet simultaneously?
I'm currently getting 1300 H/s on my super cheap, crappy desktop. I hope running it as a full node and miner helps out the network. I'm sure I won't ever mine a block, but it says the odds are 1/2000, so that's not too shabby I guess.
Does encryption protect against this? BitTorrent can be throttled by ISPs this way, no?
This is a question I find myself pondering but I don’t know enough of the technicals to know the answer.
To clarify, I’m not talking about for a specific user - I’m talking network wide. Spanning a country or even the entire internet. If the traffic between nodes could be blocked or slowed down, could this be an effective attack vector to cripple or slowdown Bitcoin transactions?
I recently ran into a big coin drop and have some crypto that needs investing. I'm very tech savvy and have always wanted to set up a provider/validator node for some crypto network, but until recently haven't had enough for the minimum stakes. I've looked into many of the options on stakingrewards.com, but I need your help to make up my mind.
Preference will be given to networks with lower minimum stakes.
This may totally be a dumb question but I’m going to ask anyways!
Since all the eth2 nodes trust some sort of global NTP server, what could happen if that NTP server were hacked to send bogus data?
Recently, nodes on node.moneroworld.com were getting slammed with 10k+ RPC connections.
I wrote a horrible bash script to block them.
I know, I know, the array is stupid. I thought it was storing the num_cnxns and the IP address as a single space-delimited string, but it split them into two variables and I just didn't feel like doing it right.
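For anyone who wants to do it "right", here is a minimal sketch of that kind of blocker. The port and threshold are assumptions (18081 is monerod's default RPC port), and the iptables rule is only echoed rather than applied:

```shell
#!/bin/bash
# Sketch of a connection-flood blocker (port and threshold are assumptions).
# block_list reads netstat-style lines on stdin, counts established
# connections to the given local port per remote IP, and prints every IP
# whose connection count exceeds the limit.
block_list() {
  local port=$1 limit=$2
  awk -v port="$port" '$4 ~ (":" port "$") { split($5, a, ":"); print a[1] }' \
    | sort | uniq -c \
    | awk -v t="$limit" '($1 + 0) > (t + 0) { print $2 }'
}

# Example wiring: echo the iptables command instead of running it,
# so nothing is actually blocked until you remove the "echo".
netstat -tn 2>/dev/null | block_list 18081 1000 | while read -r ip; do
  echo iptables -A INPUT -s "$ip" -j DROP
done
```

Letting `read` split the count and the IP into two variables (as the loop above would) is exactly the behavior the original script stumbled into.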
> OVHcloud's SBG2 data center in Strasbourg has been destroyed by a fire which also damaged SBG1. No one was hurt in the fire, but all four data centers (including SBG3 and SBG4) on the site will be closed today. Source
Just a friendly reminder of why it's important that nodes be spread across different cloud providers and across geographies. Remember: nothing is too big to fail...not even AWS, and especially not BTC ;)
I hope everyone whose node runs on OVH is safe :)
Satoshi Nakamoto, November 03, 2008:
>Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.
>The bandwidth might not be as prohibitive as you think. A typical transaction would be about 400 bytes (ECC is nicely compact). Each transaction has to be broadcast twice, so lets say 1KB per transaction. Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day. That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices.
>If the network were to get that big, it would take several years, and by then, sending 2 HD movies over the Internet would probably not seem like a big deal.
Make sure you have a version of bitcoind that supports the following entries and put them in your bitcoin.conf file, and then restart bitcoind:
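Based on the options named in this post (blockfilterindex and peerblockfilters), the entries should look like this:

```
# Build the BIP 158 compact block filter index
blockfilterindex=1
# Serve the filters to peers over P2P (BIP 157)
peerblockfilters=1
```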
bitcoind will generate the compact filters (blockfilterindex) and start serving them up via P2P (peerblockfilters). The filters currently take up about 6.2GB of disk space.
You can check to make sure your node is serving compact filters using bitcoin-cli or by using getnetworkinfo in the console.
You should now see "COMPACT_FILTERS" in the output:

$ bitcoin-cli getnetworkinfo | jq .localservicesnames
[
  "NETWORK",
  "WITNESS",
  "COMPACT_FILTERS",
  "NETWORK_LIMITED"
]
Lightning Neutrino clients will thank you. :)
EDIT: 118 full nodes so far, pamp it! https://bitnodes.io/nodes/?q=node_compact_filters
Of all these implementations, how many of each can I run on a single full node? Or can I only run one of each? Also, I have a single public IP address for the full node; would I need multiple public IP addresses? The implementations I've seen so far are LND, eclair, lit, c-lightning, lightning-onion, and ptarmigan. Let me know of any others as well.
So I had this thought: there aren't that many VPS providers out there, and most of them don't really respect privacy, so let's assume they decide to collaborate to deanonymise Tor. We might as well then assume that all VPSs are provided by a single provider.
Tor's safety rests on the assumption that no one controls 50% of the nodes; the more nodes a single entity controls, the better its chances of breaking Tor. But VPSs aren't really controlled by the people who set them up, but rather by the provider, so by setting up more and more VPSs we actually centralise the network more and more, which is bad.
Even if we trust the providers, by centralising our nodes in a few of them, those providers become juicy targets for anyone who wants to break Tor.
Examples of Obyte open-source repositories https://github.com/byteball
There are different types of nodes on Obyte, but mainly they are either full nodes (store all data) or light nodes (store only data relevant to your keys). They all use the core networking and consensus library called ocore.
There are 3 types of nodes that have to be full nodes: obyte-witness, obyte-relay, and obyte-hub. obyte-witness has to be hidden behind Tor, but obyte-relay and obyte-hub can be either hidden or public and accept incoming connections. Running obyte-relay is probably the easiest way to contribute to the decentralization of Obyte by keeping a copy of the Obyte DAG. Only obyte-witness holds private keys, because it inherits the headless-obyte features in order to make transactions.
There are 2 types of wallet nodes: headless-obyte and obyte-gui-wallet. These are nodes that hold private keys and can send (text or object) messages to each other for free through obyte-hub - this is how chatbots work. They can also have HD (Hierarchical Deterministic) wallets, which means you can have multiple accounts and addresses, make payments, or post any other data to the Obyte DAG for a predictable low fee (depending on the size of the data). Both of these can be either full nodes or light nodes, but light nodes rely on the availability of an obyte-relay or obyte-hub. Full nodes can post their payments or data directly (no gatekeepers) to the Obyte DAG and only need some obyte-relay or obyte-hub to discover initial peers.
The last type of node is called bot-example in the above image, but it can really be any project written using Obyte's wallet or messaging features. It can also be a full node or a light node. This is the project you should start from when you want to build your own project on Obyte: https://github.com/byteball/bot-example
Would it be possible in the future to use a Raspberry Pi or a VPS to run a Pi Network Node? Don't you think having just Windows and Mac desktop apps is limiting?
This is happening on all of my Apple devices: Apple TVs, iPads, laptops, and desktops. I was reading a post about 2 years old, but have not seen anything since. I called Apple and they told me to replace my Google WiFi router. I know I am not the only user in the world using Google WiFi with Apple devices, so I know that is not the problem.
Any thoughts, recommendations would be greatly appreciated. All devices are running the current versions of their respective OS.
All decentralized networks, including blockchains and other P2P systems, face the technical problem of how to gather metrics and statistics from nodes run by many different parties. Achieving this isn't exactly trivial, and there are no established best practices.
We faced this problem ourselves while building the Streamr Network, and actually ended up using the Network itself to solve it! As collecting metrics is a common need in the cryptosphere, in this blog I will outline the problem as well as describe the practical solution we ended up with, hoping it will help other dev teams in the space.
Getting detailed real-time information about the state of nodes in your network is incredibly useful. It allows developers to detect and diagnose problems, and helps publicly showcase what’s going on in your network by building network explorers, status pages and the like. In typical blockchain networks, you can of course listen in on the broadcasted transactions to build block explorers and other views of the ledger itself, but getting more fine-grained and lower-level data – like CPU and memory consumption of nodes, disk and network i/o, number of peer connections and error counts etc – needs a separate solution.
One simple approach is that the dev team sets up an HTTP server with an endpoint for receiving data from nodes. The address of this endpoint is then hard-coded to the node implementation, and the nodes are programmed to regularly submit metrics to this endpoint. However, authentication can’t really be used here, because decentralized networks are open and permissionless, and you won’t know who will be running nodes in order to distribute credentials to those parties. Exposing an endpoint to which anyone can write data is a bad idea, because it’s very vulnerable to abuse, spoofing of information, and DDoS attacks.
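As an illustration of that naive push approach, the logic inside each node amounts to something like the following sketch (the endpoint URL and metric names are made up, and the curl command is only echoed):

```shell
#!/bin/bash
# Naive metrics push, as described above (all names are hypothetical).
# The collector URL is hard-coded into the node software, and the node
# periodically POSTs a metrics snapshot to it -- with no authentication.
ENDPOINT="https://metrics.example.com/report"   # hypothetical hard-coded endpoint

build_payload() {
  # A couple of illustrative metrics; a real node would report CPU,
  # memory, disk/network i/o, peer counts, error counts, etc.
  printf '{"peer_count": %d, "error_count": %d}' "$1" "$2"
}

payload=$(build_payload 34 2)
# In a real node this would run on a timer (e.g. every 60 seconds);
# here we echo the curl command instead of executing it:
echo curl -s -X POST -H 'Content-Type: application/json' \
     -d "$payload" "$ENDPOINT"
```

Since nothing stops anyone else on the internet from POSTing to the same endpoint, this sketch also makes the spoofing and DDoS exposure concrete.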
Another approach is to have each node store metrics data locally and expose it publicly via a read-only API. Then, a separate aggregator script run by the dev team can connect to each node and query the information to get a picture of the whole network. However, this won’t really work if the nodes are behind firewalls, which is usually the case. The solution also scales badly, because in large networks with thousands o…