Bobinas P4G
Conversation

Notices

  1. spinflip (spinflip@qoto.org)'s status on Tuesday, 28-Aug-2018 11:30:00 UTC

    I'm working on a project that involves retrieving large (~2-8 GB) .zip files over HTTP and storing them for later processing. I've written a script that uses an API to look up and generate URLs for a series of needed files, and then attempts to stream each file to storage using requests.get().iter_content.

    The problem is, my connection isn't perfectly stable (and I'm running this on a laptop that sometimes goes to sleep). When the connection is interrupted, the transfer dies and I have to restart it from the beginning.

    What would be the best way to add resume capability to my file transfer, so that if the script stalls or the connection drops, the download can be resumed from where it failed?
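
    One way to add this, assuming the servers support HTTP Range requests (most static-file hosts do): check how many bytes of the partial file are already on disk, then ask the server for the remainder and append. A minimal sketch, not from the thread; the names download_with_resume and fetch are illustrative:

        import os
        import time

        import requests

        CHUNK_SIZE = 1024 * 1024  # stream in 1 MiB chunks


        def download_with_resume(url, dest_path):
            # Resume from however many bytes are already on disk.
            resume_from = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
            headers = {"Range": "bytes=%d-" % resume_from} if resume_from else {}

            with requests.get(url, headers=headers, stream=True, timeout=60) as r:
                if r.status_code == 206:
                    mode = "ab"  # 206 Partial Content: server honoured the Range header
                else:
                    r.raise_for_status()  # surface 4xx/5xx errors
                    mode = "wb"           # plain 200: server ignored Range, start over

                with open(dest_path, mode) as f:
                    for chunk in r.iter_content(chunk_size=CHUNK_SIZE):
                        f.write(chunk)


        # Keep retrying; each attempt picks up where the previous one stopped.
        def fetch(url, dest_path, retries=20, delay=10):
            for _ in range(retries):
                try:
                    download_with_resume(url, dest_path)
                    return
                except requests.exceptions.RequestException:
                    time.sleep(delay)  # wait for the connection to come back
            raise RuntimeError("giving up on %s" % url)

    If the server sends a Content-Length, comparing it against the bytes already on disk also lets you skip files that have finished, rather than re-requesting a zero-length range.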

    In conversation Tuesday, 28-Aug-2018 11:30:00 UTC from qoto.org

