Hello selfhosted! Sometimes I have to transfer big files or a large number of small files in my homelab. I used rsync, but specifying the IP address and the folders and everything is a bit fiddly. I thought about writing a bash script, but before I do that I wanted to ask you about your favourite way to achieve this. Maybe I am missing out on an awesome tool I wasn't even thinking about.
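For reference, this is roughly what I end up typing every time (the IP and paths are just placeholders):

# -a preserves permissions and timestamps, -v is verbose, -P shows progress and resumes partial transfers
rsync -avP /mnt/data/photos/ user@192.168.1.50:/srv/backup/photos/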
Sounds very straightforward. Do you have a Samba docker container running on your server, or how do you do that?
Do you really need a container for Samba?
I see the benefits of containers, but using one here would be overkill.
I set up SMB on my file share VM.
My dedicated docker host accesses it through an NFS mount.
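The mount on the docker host is just a normal NFS line in /etc/fstab, something like this (hostname and paths are made up):

# export from the file share VM, mounted automatically once the network is up
fileshare.lan:/export/data  /mnt/data  nfs  defaults,_netdev  0  0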
I just type
sftp://[ip, domain or SSH alias]
into my file manager and browse it as a regular folder.

YOU CAN DO THAT???
Linux is truly extensible, and that is the part I both love and struggle to explain the most.
I can sit at my desktop, developing code that physically resides on my server, and interact with it from my laptop. This does not require any strange, janky setup; it's just SSH. It's extensible.
I love this so much. When I first switched to Linux, being able to just list a bunch of server aliases along with the private key references in my .ssh/config made my life SO much easier than the redundantly maintained and hard-to-manage PuTTY and WinSCP configurations on Windows.
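An entry in ~/.ssh/config looks something like this (alias, address, and key file are just examples):

# after this, 'ssh homeserver', 'sftp homeserver' and sftp://homeserver in the file manager all work without typing the IP or key path
Host homeserver
    HostName 192.168.1.50
    User me
    IdentityFile ~/.ssh/id_ed25519_homeserver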
Dolphin?
Any file manager on Linux supports this. Dolphin does it through KIO; GNOME Files and most others go through GVFS.
I have two servers, one Mac and one Windows. For the Mac I just map directly to the smb share, for the Windows it’s a standard network share. My desktop runs Linux and connects to both with ease.
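Concretely it's just the two path styles (hostname and share name are examples): on the Mac that's Connect to Server with

smb://fileserver.local/data

and on Windows it's the usual UNC path in Explorer:

\\fileserver\data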
I don't have a docker container, I just have Samba running on the server itself.
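The share itself is only a few lines at the end of /etc/samba/smb.conf, roughly (share name, path, and user are placeholders):

[data]
    # directory exposed to SMB clients on the LAN
    path = /srv/data
    read only = no
    valid users = me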
I do have an ownCloud container running, which is mapped to a directory, and I have that directory shared out through Samba so I can access it through my file manager. But that's unnecessary because ownCloud is kind of trash.