NodeJS – Request file and zip it


I am currently building a REST API for my personal website. I'd like to include some downloads, and I would like to offer the option of selecting several of them and downloading them together as a zip file.
My first approach was pretty simple: an array of URLs, a request for each of them, zip everything, send it to the user, delete the files. However, this approach feels too crude, especially since Node has streams, which seem well suited to this task.

I have experimented a bit and am currently struggling with the basic concept of working with streams and events across different scopes.

The following worked:

const r = request(url, options);
r.on('response', function(res) {
    // Pipe the response stream into a write stream to the file system
    // (path is illustrative).
    res.pipe(fs.createWriteStream('./myfile.pdf'));
});
From my understanding, r is an incoming stream in this scenario; I listen for the response event on it, and as soon as it fires, I pipe the response to a stream that writes to the file system.

My first step was to refactor this to better fit my use case, but I already failed there:

async function downloadFile(url) {
    return request({ method: 'GET', uri: url });
}

Then I wanted a function that calls downloadFile() with different URLs and saves all those files to disk, again using createWriteStream():

const urls = ['https://download1', 'https://download2', 'https://download3'];
urls.forEach(element => {
    downloadFile(element).then(data => {
        // ... inspect `data` here
    });
});

Using the debugger, I found out that the "response" event does not exist on the data object; maybe that's already the issue? Moreover, I figured out that data.body contains the bytes of my downloaded document (a PDF in this case), so I wonder if I could just stream this to some other place?

After reading some Stack Overflow threads, I found the following module: archiver

Reading this thread: Dynamically create and stream zip to client

@dankohn suggested an approach like that:

  .append(fs.createReadStream(file1), { name: 'file1.txt' })
  .append(fs.createReadStream(file2), { name: 'file2.txt' });

This made me assume that I need to be able to extract a stream from my data object in order to proceed.

Am I on the wrong track here or am I getting something fundamentally wrong?



Using archiver seems to be a valid approach; however, it is advisable to feed large downloads from the web into the zip archive as streams. Otherwise, the whole archive data would have to be held in memory.

zip-stream makes it easy to add entries directly from streams. For reading a stream from the web, request comes in handy.


// npm install --save express zip-stream request

const request = require('request');
const ZipStream = require('zip-stream');
const express = require('express');

const app = express();

app.get('/', (req, res) => {
    const zip = new ZipStream();
    zip.pipe(res);

    const stream = request(''); // source URL goes here
    zip.entry(stream, { name: 'picture.jpg' }, err => {
        if (err) throw err;
        zip.finalize();
    });
});

app.listen(3000);


Update: Example for using multiple files

Here is an example that recursively processes the next file in the queue from the callback of zip.entry().

app.get('/', (req, res) => {
    const zip = new ZipStream();
    zip.pipe(res);

    const queue = [
        { name: 'one.jpg', url: '' },
        { name: 'two.jpg', url: '' },
        { name: 'three.jpg', url: '' }
    ];

    function addNextFile() {
        const elem = queue.shift();
        const stream = request(elem.url);
        zip.entry(stream, { name: elem.name }, err => {
            if (err) throw err;
            if (queue.length > 0) {
                addNextFile();
            } else {
                zip.finalize();
            }
        });
    }

    addNextFile();
});

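The recursive callback pattern above can also be phrased with async/await. The sketch below keeps the same sequential-queue idea but uses a stand-in addEntry() in place of a promisified zip.entry(), so it runs on its own:

```javascript
const queue = ['one.jpg', 'two.jpg', 'three.jpg'];
const added = [];

// Stand-in for a promisified zip.entry(): resolves asynchronously
// once the (pretend) entry has been written to the archive.
function addEntry(name) {
    return new Promise(resolve => setImmediate(() => {
        added.push(name);
        resolve();
    }));
}

async function addAll() {
    for (const name of queue) {
        // The next entry starts only after the previous one has
        // finished, mirroring the addNextFile() recursion above.
        await addEntry(name);
    }
    console.log(added.join(', '));
}

addAll();
```

Processing entries strictly one after another matters here because a zip archive is written sequentially; starting the next entry before the previous one finishes would corrupt the output.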
Answered By – Robert
