From: Sergey Matveev <stargrave@stargrave•org>
To: nncp-devel@lists.cypherpunks.ru
Subject: Re: NNCP at scale report
Date: Sun, 3 Jan 2021 00:48:11 +0300	[thread overview]
Message-ID: <X/DqNi1jGTV9JGzB@stargrave.org> (raw)
In-Reply-To: <87zh1rkr94.fsf@complete.org>


*** John Goerzen [2021-01-02 14:39]:
>I'd be interested to hear more about your use case for chunked transfers.  I
>guess their main use would be if individual files are too large for
>sneakernet?  With the resumable transfers over the network, though, I haven't
>bothered.

Either the files are too large for the storage used in the sneakernet, or
the spool directory does not have enough space for very big files. My
main storage on one of the servers is on an encrypted pool, but the
spool itself is not on an encrypted disk (so it is available right after
a reboot) and has "zfs set quota=2G" on it. By placing the
"freq: {chunked: ...}" configuration option for that server, all my
nncp-file transfers automatically produce 512 MiB chunks if the file is
larger. When my spool fills up (for example, when tossing is slower than
network/whatever transfers), everyone "waits" until there is enough
space: nncp-xfer checks for free space and won't even try to copy files,
and nncp-daemon won't ask for a file download if there is not enough
space either. The incoming NNCP directory (for tossed files) is placed
on the big encrypted pool; not simply on the pool, but on a separate
dataset with ZFS deduplication turned on. That prevents writing to the
disk when files are reassembled. Chunks are simply concatenated
together, and when NNCP and ZFS block boundaries are "in sync",
deduplication detects that the reassembled file's block contents are
identical and won't write a copy to the disk (because the same blocks
are already there).

>I wondered, but then I wasn't sure if there would be race conditions with
>temporary files, half-written files, etc., and I didn't want to risk it.

Temporary files are placed in the tmp/ subdirectory; .part files can
not exist in the tx/ directory. Any partly written file either has the
.part suffix (in rx/ directories), or it is placed in tmp/ and then
atomically renamed to the necessary file in the tx/ directory. So if
there is a file in tx/, then it is a complete, fsync-ed, fully written
file with its checksum used as its filename.

>My thought was this...  when I travel (hopefully we can do that again this
>year!) my laptop would, when on a good network, continue to send backups to
>the spooling server at my house. However, sometimes something happens - power
>outage, Internet outage, etc.  In that case I would rather have the backups
>-- including the ones previously generated -- go spool on my server where
>they could sit until the home network comes back up.

I see. Well, currently only manual intervention can help in that
situation.

>I also had the thought -- well I could probably just manually copy them over
>to the appropriate tx queue on the server.  Or if nncp-xfer or nncp-bundle
>would have a feature where they would not ignore, but instead ingest, packets
>whose next hop is any node that the machine knows about, I could always just
>send them down a nncp-bundle pipe.

I will think about what can be done easily. Probably some kind of
additional nncp- utility for transitional packets.

>What might be useful would be a "maintenance" page in the manual, just
>listing things that a person might do: rotate the log file, look for old
>packets, look for old seen files.

That is a good idea. I will definitely write it.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263  6422 AE1A 8109 E498 57EF


Thread overview: 6+ messages
2021-01-02  3:09 NNCP at scale report John Goerzen
2021-01-02 14:17 ` Sergey Matveev
2021-01-02 20:39   ` John Goerzen
2021-01-02 21:48     ` Sergey Matveev [this message]
2021-01-03  6:08   ` Shawn K. Quinn
2021-01-03 11:47     ` Sergey Matveev