public inbox for nncp-devel@lists.stargrave.org
From: Sergey Matveev <stargrave@stargrave•org>
To: nncp-devel@lists.cypherpunks.ru
Subject: Re: Assorted NNCP questions
Date: Sun, 27 Dec 2020 12:53:51 +0300 [thread overview]
Message-ID: <X+hZr0GdT8OMlAqB@stargrave.org> (raw)
In-Reply-To: <87eejbq2gh.fsf@complete.org>
Greetings!
*** John Goerzen [2020-12-26 22:48]:
>First, is nncp-toss multithreaded? If so, would it be possible to have an
>option forcing it to run requests sequentially?
Packet processing is intentionally sequential. With fast hash/encryption
algorithms (BLAKE2b/ChaCha20-Poly1305) the bottleneck should not be the
CPU but the HDD, which obviously performs best under sequential load.
However, processing a single packet uses one goroutine for decryption
and a separate one for decompression (used for "command" packets created
with nncp-exec) -- so it can occupy more than two CPUs/cores. You can
limit that by setting the GOMAXPROCS=1 environment variable when running
any Go program.
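To illustrate (a minimal Go sketch, not NNCP code): the Go runtime reads
the GOMAXPROCS environment variable at startup, and runtime.GOMAXPROCS
lets a program query or change that same limit at run time:

```go
package main

import (
	"fmt"
	"runtime"
)

// capProcs mirrors what GOMAXPROCS=1 in the environment does for any Go
// program (nncp-toss included): it limits how many OS threads may run Go
// code simultaneously. A zero argument only queries the current setting.
func capProcs(n int) int {
	runtime.GOMAXPROCS(n)
	return runtime.GOMAXPROCS(0)
}

func main() {
	fmt.Println(capProcs(1)) // prints 1
}
```

Running "GOMAXPROCS=1 nncp-toss ..." from the shell achieves the same
effect without touching the code.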
>But the onlinedeadline isn't being respected in the run by nncp-caller; it
>still disconnects after 10 seconds, so this results in a new connection being
>established every minute. I also tried definine onlinedeadline at the parent
>(neighbor) level, rather than within the calls structure, but that didn't
>help either.
The onlinedeadline option must be "in sync" on both nodes, because no
agreement about it is made inside the online protocol itself. But
anyway, currently I do not understand why it is not working properly for
you. I remember there were problems some time ago, but I thought they
were fixed. My "upstream" node (my gateway and mail server) has only the
onlinedeadline option for my "laptop":
stargrave.org: {
  id: [...]
  exchpub: [...]
  signpub: [...]
  noisepub: [...]
  freq: {
    path: /storage
    chunked: 524288
  }
  incoming: /storage/incoming
  exec: {
    sendmail: ["/usr/sbin/sendmail"]
  }
  onlinedeadline: 3600
}
and my laptop's configuration for that gateway/upstream node is:
gw: {
  id: [...]
  exchpub: [...]
  signpub: [...]
  noisepub: [...]
  exec: {
    sendmail: ["/usr/sbin/sendmail"]
  }
  incoming: /home/stargrave/incoming
  freq: {
    chunked: 524288
  }
  calls: [
    {
      cron: "*/10 9-21 * * MON-FRI"
      nice: PRIORITY
      rxrate: 1
    },
    {
      cron: "*/1 21-23,0-9 * * MON-FRI"
      onlinedeadline: 3600
      addr: lan
    },
    {
      cron: "*/1 * * * SAT,SUN"
      onlinedeadline: 3600
      addr: lan
    },
  ]
  addrs: {
    lan: "[fe80::be5f:f4ff:fedd:2752%bridge0]:540"
    main: "..."
  }
}
I run nncp-caller as a background process all day long, and nncp-daemon
on the upstream machine through inetd. And my connections live long
enough:
# zstd -d < /var/spool/nncp/log.2.zst | grep call-finish
I 2020-12-24T19:07:40.34625988Z [call-finish duration="22540" node="..." rxbytes="15598501100" rxspeed="706453" txbytes="591692" txspeed="26"]
I 2020-12-24T20:38:29.768306849Z [call-finish duration="511" node="..." rxbytes="36480" rxspeed="304" txbytes="175416" txspeed="730"]
I 2020-12-25T00:21:58.498325841Z [call-finish duration="12420" node="..." rxbytes="42044" rxspeed="3" txbytes="171628" txspeed="13"]
I 2020-12-25T04:16:39.095366317Z [call-finish duration="14079" node="..." rxbytes="39000" rxspeed="2" txbytes="135020" txspeed="9"]
I 2020-12-25T05:45:00.081001429Z [call-finish duration="5280" node="..." rxbytes="34908" rxspeed="6" txbytes="60328" txspeed="11"]
I 2020-12-25T07:58:00.080451266Z [call-finish duration="7920" node="..." rxbytes="38640" rxspeed="4" txbytes="146672" txspeed="18"]
I 2020-12-25T08:18:00.090356686Z [call-finish duration="1140" node="..." rxbytes="33036" rxspeed="32" txbytes="32988" txspeed="32"]
I 2020-12-25T09:35:00.09105514Z [call-finish duration="4560" node="..." rxbytes="36656" rxspeed="8" txbytes="122124" txspeed="27"]
I 2020-12-25T09:59:00.090938021Z [call-finish duration="1380" node="..." rxbytes="33100" rxspeed="26" txbytes="33052" txspeed="26"]
I 2020-12-25T10:17:00.127521868Z [call-finish duration="1020" node="..." rxbytes="33140" rxspeed="36" txbytes="407176" txspeed="452"]
I 2020-12-25T12:09:00.09443685Z [call-finish duration="6660" node="..." rxbytes="41452" rxspeed="6" txbytes="158844" txspeed="24"]
I 2020-12-25T12:10:10.107415566Z [call-finish duration="10" node="..." rxbytes="32748" rxspeed="32748" txbytes="32700" txspeed="32700"]
(the short ones are because I disconnected my laptop from the network).
Are you sure the onlinedeadline option on the node you connect *to* is
in the node's section, and not inside "calls"? But I will check all of
that again during the holidays next year.
>I am considering just running nncp-call instead of nncp-caller as a systemd
>service, hoping that perhaps it would send periodic pings to notice if the
>remote end goes away (does it?)
onlinedeadline tells exactly how many seconds to wait before considering
the peer dead if no replies were received from it. It should work (as
should nncp-caller :-)). Actually -call and -caller use exactly the same
code/functions: -caller is just a loop that waits for the right moment
to invoke -call's connection function. Of course there can be bugs, so I
will check that again soon.
>A -> B -> C
>Now, you use nncp-xfer or nncp-bundle to offload data for B. But instead of
>plugging the USB stick/whatever into B, you plug it into C and load it in.
>Now what happens?
C will:
>- Ignore it all?
Everything here is very simple. And I am surprised that there is no
description of nncp-xfer's directory layout in the documentation. I have
to fix that soon too!
For example, I sent a file from node 2BV...VCQ to node NFG...Y2A (I
stripped the long Base32 identifiers) and ran nncp-xfer -mkdir on a
completely empty directory (representing removable storage). It creates
the following:
/tmp/shared
├── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ <- node itself
│ └── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ
└── NFGW32PP4WLOCXSY5KGGJBQTM3GOGHZJ6K745TBHUYG6HDZ2JY2A <- destination node id
└── 2BVYXV6RWH74NXWRMD2SLDX44TEPSWP47TVR7NPTVA6Z63WJEVCQ <- source node id
└── CS5AE4UVOV4JRT3UKLYZRDHFHQG4BIYMTNOX3V7QPFAOTE3I72KA <- packet itself
Let's close our eyes to the "double" 2BV...VCQ directories -- that is
just the possibility of sending packets to "self". The shared directory
holds entries for "destination" nodes. Each "destination" node directory
holds subdirectories for "source" nodes. And each "source" node
directory contains the packets themselves (a packet's filename is a
checksum of its contents).
If -xfer, running on 2BV...VCQ, has outbound packets for NFG...Y2A and
sees that the specified shared directory contains NFG...Y2A, then it
will create a 2BV...VCQ source subdirectory and place the packets inside
it.
If no NFG...Y2A directory exists, then that storage will not travel to
that node, so no packets are copied. Of course, someone has to either
create that directory manually, or run -xfer with the -mkdir option (as
a rule together with -node) at least once.
When -xfer sees a directory with its own node's id, it treats the
packets inside it as inbound ones.
In your example, if you run "nncp-xfer -mkdir -node B /mnt" on
completely blank storage, then only "B/A/pkt" will be created (ok, also
"A/A/"). So when nncp-xfer is then run on node C, that storage will be
completely ignored because there is no "C/" directory.
When you use "-via B", a special "transition" packet is created for node
B. Technically it is a packet of a special type that contains only the
id of the node to which its contents must be copied. And its contents
are just the ordinary encrypted packet that you would have created for
node C without specifying -via. So the packet is literally just wrapped
and encrypted to the via-node(s). Because it can be decrypted only by
node B, no observer can learn anything about its contents (well, except
for the niceness level) or determine whether it is an ordinary file
transmission or a transition packet. So if you created a "-via B" packet
for C, only node B can decrypt it and see that it is actually a
transition packet to node C. Node C simply can not do anything with data
encrypted only to node B.
This resembles (not intentionally!) the onion encryption used, for
example, in Tor: each packet is wrapped inside another one, and any
intermediate node knows only where to send it further and from which
node it was received.
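The wrapping can be modelled with a toy Go struct (illustration only --
this is not NNCP's actual packet format, and "open" merely simulates
decryption by an addressee):

```go
package main

import "fmt"

// packet is a toy model of an encrypted packet: only `recipient` can
// "decrypt" it; a transition packet carries the next hop plus an inner
// packet as its payload.
type packet struct {
	recipient string
	nextHop   string      // empty for a final (non-transition) packet
	payload   interface{} // inner packet, or the file data itself
}

// open simulates node `node` trying to decrypt a packet addressed to it.
func open(p packet, node string) (interface{}, bool) {
	if p.recipient != node {
		return nil, false // undecryptable by anyone else
	}
	return p.payload, true
}

func main() {
	// "-via B" for a file destined to C: the C-packet is wrapped in a
	// transition packet that only B can open.
	inner := packet{recipient: "C", payload: "file data"}
	outer := packet{recipient: "B", nextHop: "C", payload: inner}
	if _, ok := open(outer, "C"); !ok {
		fmt.Println("C can not open the outer layer")
	}
	if pl, ok := open(outer, "B"); ok {
		// B learns only the next hop, not the inner contents' meaning.
		fmt.Println("B forwards the inner packet to", pl.(packet).recipient)
	}
}
```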
Both the spool and -xfer's shared directories contain only encrypted
packets, which can be processed only after authenticated decryption. You
can not do anything with them, because you do not know anything about
them except (see http://www.nncpgo.org/Encrypted.html) the sender's and
recipient's node ids and the niceness level -- that is all. You have to
verify the header's signature with the sender's public key, perform an
ephemeral key agreement to obtain the symmetric key, and decrypt the
contents with it. Even the "real" size of the contents is encrypted: a
packet can contain a small email message followed by megabytes of junk.
Technically it would be rather simple to add the ability to encrypt a
packet to multiple recipients at once: just encrypt the same single
symmetric key to each node with ephemeral DH keys. That adds only a few
dozen bytes per additional node. So we could add everyone in the -via
path as an additional recipient of the packet and of the transitional
packets inside it, without any considerable CPU or disk-space overhead
-- and everyone in the -via path (and the target node) would be able to
process the packet. One drawback is that it reveals all the
"participants", so onion encryption becomes useless there. But I do not
want to say that it is unacceptable (if the user is given the choice).
NNCP was never an anonymity-preserving network anyway.
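The multi-recipient idea can be sketched like this (a toy model: XOR
with a per-node shared secret stands in for the real ephemeral-DH key
wrapping, which would of course use proper authenticated encryption):

```go
package main

import "fmt"

// wrapKey "encrypts" the single content key for one node using that
// node's agreed secret. XOR is a stand-in for real key wrapping.
func wrapKey(key, secret byte) byte { return key ^ secret }

func main() {
	contentKey := byte(0x42)
	// Hypothetical per-node DH results for via-node B and target C.
	secrets := map[string]byte{"B": 0x11, "C": 0x22}
	wrapped := map[string]byte{}
	for node, s := range secrets {
		wrapped[node] = wrapKey(contentKey, s) // a few bytes per extra node
	}
	// Every listed node recovers the same content key with its own secret:
	fmt.Println(wrapKey(wrapped["B"], 0x11) == contentKey) // true
	fmt.Println(wrapKey(wrapped["C"], 0x22) == contentKey) // true
}
```

Note how the list of wrapped keys itself is what reveals the
participants: anyone seeing the packet header can count the recipients.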
Another complication is that each packet would no longer belong to just
a single target node. Possibly making (symbolic?) links of the same file
for multiple nodes is enough. And if some out-of-band node accidentally
sees a packet it can process, it will process it -- just as in your
example, where node "C" would process the "-via B"-ed packet.
Moreover, that would give the ability to multicast packets. I liked the
idea of hierarchical multicasting in Usenet, but I have never used it.
However, for several years I was a point in FidoNet, so I saw it in
practice there. Actually FidoNet does not have multicasting ability in
its transport protocols: an inbound message to an echo-area is fed to an
echo-processor that simply creates copies of the message for the other
outbound nodes. Of course that can be done with NNCP and additional
news-like, echo-processor-like software. But possibly it could be built
into NNCP out of the box somehow. I am not sure about that, but the
multicasting idea delights me much! I will think about all of that.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF