How to download a large file in PHP

One approach is to create a symbolic link to the real file and point the download link at it. Then, when the user clicks the download link, they get the real file, but named after the symbolic link. It takes milliseconds to create the symbolic link, which is better than trying to copy the file to a new name and downloading from there. That's what I did, because GoDaddy kills a script after about 2 minutes 30 seconds of running. With this approach the web server sends the file to the browser, and it doesn't matter how long the transfer takes, since it isn't a script that is running.
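A rough sketch of that idea, with placeholder paths, assuming the host permits symlink() and the web server is configured to follow symbolic links:

<?php
// Placeholder paths: the real file lives outside the public tree,
// the symlink gets a unique name inside the web root.
$realFile = '/home/account/storage/big-archive.zip';
$linkName = __DIR__ . '/downloads/' . uniqid('dl_', true) . '.zip';

if (!symlink($realFile, $linkName)) {
    die('Could not create download link');
}

// Redirect the user to the symlink; from here the web server streams
// the real file, so no PHP script has to stay alive for the transfer.
header('Location: /downloads/' . basename($linkName));
exit;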

A few lines of code will do all the work of the download: readfile() streams the entire specified file to the client. Be sure to set an infinite time limit as well, or the script may run out of execution time before the file has finished streaming.
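A minimal sketch of that answer, assuming a placeholder path and a generic attachment type:

<?php
$file = '/path/to/large-file.zip';   // placeholder path

set_time_limit(0);                   // no execution time limit while streaming

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));

readfile($file);                     // streams the whole file to the client
exit;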

If you are using lighttpd as a web server, an alternative for secure downloads would be mod_secdownload. It needs server configuration, but you let the web server handle the download itself instead of the PHP script. Generating the download URL looks like the sketch below (adapted from the documentation), and it could of course be generated only for authorized users. That said, depending on the size of the files, using readfile() as proposed by Unkwntech is excellent, and using X-Sendfile as proposed by garrow is another good idea, also supported by Apache.
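A sketch of the URL-generation side, following the classic mod_secdownload token scheme; the secret, the /dl/ prefix and the file path are placeholders that must match secdownload.secret, secdownload.uri-prefix and secdownload.document-root in the lighttpd configuration:

<?php
$secret     = 'verysecret';       // must match secdownload.secret
$uri_prefix = '/dl/';             // must match secdownload.uri-prefix
$file       = '/secret-file.txt'; // path below secdownload.document-root

$t_hex = sprintf('%08x', time());         // request time, hex encoded
$token = md5($secret . $file . $t_hex);   // classic MD5 token

// The link is only valid for secdownload.timeout seconds after generation.
printf('<a href="%s%s/%s%s">download</a>', $uri_prefix, $token, $t_hex, $file);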

I'm not sure this is a good idea for large files. If the thread for your download script runs until the user has finished the download, and you're running something like Apache, just 50 or more concurrent downloads could crash your server, because Apache isn't designed to run large numbers of long-running threads at the same time.

Of course, I might be wrong, if the Apache thread somehow terminates and the download sits in a buffer somewhere while the transfer progresses.

What is the best way to solve this problem? Regards, Erwing.

Part of this problem might be solved by supporting Range headers, so browsers can pause and resume downloads; other Stack Overflow questions and answers deal with that in more detail.

This is a perfectly working solution, used and tested in many of my projects.

A related gist covers the other direction: downloading a large file from the web via PHP.
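The gist's own code is not reproduced here; as a rough sketch of the usual approach, read the remote stream in fixed-size chunks and append each chunk to a local file, so the whole download never has to fit in memory (the URL and local path are placeholders):

<?php
$url  = 'https://example.com/big-file.zip';  // placeholder remote URL
$dest = '/tmp/big-file.zip';                 // placeholder local path

$remote = fopen($url, 'rb');                 // requires allow_url_fopen
$local  = fopen($dest, 'wb');

if ($remote === false || $local === false) {
    die('Unable to open source or destination');
}

// Copy roughly 1 MB at a time so memory usage stays flat.
while (!feof($remote)) {
    fwrite($local, fread($remote, 1024 * 1024));
}

fclose($remote);
fclose($local);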

Would it surprise you to know that, even though we split the text document up into chunks, we still use only a small amount of memory, far less than the size of the document itself? Generators have other uses, but this one is demonstrably good for performant reading of large files. If we need to work on the data as we read it, generators are probably the best way.
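A generator-based reader along those lines might look like the following; the file name is a placeholder, and only the line currently being yielded is held in memory:

<?php
function readTheFile(string $path): Generator
{
    $handle = fopen($path, 'r');

    while (!feof($handle)) {
        yield fgets($handle);   // hand back one line at a time
    }

    fclose($handle);
}

foreach (readTheFile('huge-text-file.txt') as $line) {
    // work on a single line here; earlier lines are already released
}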

We might also want to pipe data from one file to another, and we can achieve this by using stream methods. A naive copy that reads the whole file into memory and writes it back out unsurprisingly uses slightly more memory to run than the text file it copies. For small files, that may be okay; when we start to use bigger files, not so much… The stream-based version looks slightly strange at first: we open handles to both files, the first in read mode and the second in write mode, and then copy from the first into the second, as in the sketch below.
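A sketch of that stream-based copy, with placeholder file names; stream_copy_to_stream() moves the data in small internal chunks, so memory usage stays low however large the source is:

<?php
$source      = fopen('huge-text-file.txt', 'r');
$destination = fopen('copy-of-file.txt', 'w');

// Copy everything from the read handle into the write handle.
stream_copy_to_stream($source, $destination);

fclose($source);
fclose($destination);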

We finish by closing both files again. It may surprise you to know that the memory used stays at a similarly small, fixed amount, however big the file being copied is.

A commenter on one of the answers notes that an explicit close can prevent the download; since the file handle closed by itself in their case, they omitted it and everything worked fine.

I'm curious if someone understands why.

@Bludream: not in the current form. To handle resuming of downloads, you should check for the client's Range header; if a bytes value is present, you can then fseek() to that offset in the file and send an appropriate Content-Range header before sending it.
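A sketch of that resuming logic, with a placeholder path; it honours a simple "bytes=start-end" Range header, answers with 206 Partial Content, and streams only the requested slice:

<?php
$file   = '/path/to/large-file.zip';   // placeholder path
$size   = filesize($file);
$start  = 0;
$length = $size;

if (isset($_SERVER['HTTP_RANGE'])
    && preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start  = (int) $m[1];
    $end    = ($m[2] !== '') ? (int) $m[2] : $size - 1;
    $length = $end - $start + 1;

    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Content-Type: application/octet-stream');
header('Content-Length: ' . $length);

set_time_limit(0);
$handle = fopen($file, 'rb');
fseek($handle, $start);

// Stream the requested range in 8 KB chunks.
while ($length > 0 && !feof($handle)) {
    $chunk   = fread($handle, min(8192, $length));
    echo $chunk;
    $length -= strlen($chunk);
    flush();
}

fclose($handle);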

I don't recommend this for large files, as the original question states. Smaller files work fine, but I was using fpassthru() on a large file and my download died because the "allowed memory size was exhausted". @Magmatic: yes, as the manual says, fpassthru() will output the file to the output buffer.

How is this transferring a file? Especially if the file is outside the web root, that solution will not work.

Correct me if I'm wrong, but using raw sockets forces you to implement every HTTP feature you encounter yourself, such as redirections, compression, encryption, and chunked encoding. It might work in specific scenarios, but it isn't the best general-purpose solution.

Just make sure you switch off output buffering before readfile(), and the problem should be fixed.
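A small sketch of that last point, with a placeholder path: closing any active output buffers before readfile() lets PHP stream the bytes straight to the client instead of collecting them in a buffer first.

<?php
$file = '/path/to/large-file.zip';   // placeholder path

// Close every active output buffer before streaming.
while (ob_get_level() > 0) {
    ob_end_clean();
}

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($file));

readfile($file);
exit;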


