fetchTree: Allow fetching plain files #6548
Conversation
One thing I’m really unhappy with is the …
I would remove the "unpack" parameter and have two different input types: "tarball" and "file". Calling it "url" is not very meaningful since most fetchers use a URL; they just differ in what they do with the URL (e.g. in this case, unpack a tarball, or use a plain file).
That’s actually what I had initially. I moved away from it because it was making the URL parsing slightly weird: I had the … but now that I write it down, it feels much less dirty than the actually weird semantics of the current PR, so you’re right, I’ll switch back to that 🙂
Add a new `file` fetcher type, which will fetch a plain file over http(s), or from the local filesystem. Because plain `http(s)://` or `file://` URLs can already correspond to `tarball` inputs (if the path ends with a known archive extension), the URL parsing logic is a bit convoluted:

- `{http,https,file}://` URLs will be interpreted as either a `tarball` or a `file` input, depending on the extension of the path part (so `https://foo.com/bar` will be a `file` input and `https://foo.com/bar.tar.gz` a `tarball` input)
- `file+{something}://` URLs will be interpreted as `file` URLs (with the `file+` part removed)
- `tarball+{something}://` URLs will be interpreted as `tarball` URLs (with the `tarball+` part removed)

Fix #3785

Co-Authored-By: Tony Olagbaiye <[email protected]>
OK, I’ve made them separate input types, and added the …
Does a “file” type need to have the “flake” parameter? Or can it default to false, because if you do not unpack it can’t be a flake at all? Perhaps the file itself can be a raw flake.nix file, but that seems too rare an edge case; I’d prefer the simpler situation where “file+” by default knows it cannot be a flake.
Ideally it probably shouldn’t indeed. In practice I couldn’t find an easy and clean way to do it ;)
> In URL form, the schema must be `tarball+http://`, `tarball+https://` or `tarball+file://`.
> If the extension corresponds to a known archive format (`.zip`, `.tar`, `.tgz`, `.tar.gz`, `.tar.xz`, `.tar.bz2` or `.tar.zst`), then the `tarball+` can be dropped.
Isn't that change breaking backwards compatibility?
Nope, on the contrary: currently http(s)/file URLs are required to have such an extension and are treated as tarballs. The new behavior makes it so that what was already accepted still is, with the same semantics, and the rest is treated as `file` URLs.
The problematic bit, though, is that adding a new tarball extension would break backwards compatibility a bit: with this change, `https://foo.com/bar.tar.lzma` is fetched as a plain file, and if we wanted to add `.tar.lzma` as a recognized archive extension, it would become a directory tarball.
> * `file`: Plain files or directory tarballs, either over http(s) or from the local disk.
Might as well rename the function to `builtins.fetch` if it downloads arbitrary things, even non-tree ones.
Might be an idea, indeed :)
I’d leave that for later though if you’re OK with that
```cpp
std::pair<StorePath, Input> fetch(ref<Store> store, const Input & input) override
{
    auto file = downloadFile(store, getStrAttr(input.attrs, "url"), input.getName(), false);
```
A problem with the use of `downloadFile()` is that it uses `FileIngestionMethod::Flat` instead of `FileIngestionMethod::Recursive`. Currently it's assumed that all flake inputs use recursive+sha256 with a name of "source". This allows inputs to be substituted using the `narHash` attribute in the lock file (see `Input::computeStorePath()`). However, the lazy trees branch will probably remove the ability to substitute inputs anyway...
Oh indeed, I didn’t notice the `Flat` here. Would it be an issue to make `downloadFile` use `Recursive`? Afaik none of the other use sites really care about how the result is hashed, since it’s only used internally by the fetchers and doesn’t leak outside.

(Alternatively, I could just wrap that by adding an extra step that copies the output to a `Recursive` CA location, but if that can be avoided it’s even better.)
This pull request has been mentioned on NixOS Discourse. There might be relevant details there:
This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/tweag-nix-dev-update-31/19481/1
PR to add support to flake-compat:
Replace the `tarball` fetcher by a more generic `url` one, mostly similar except that it takes an extra `unpack` parameter (false by default) to specify whether the target is an archive to extract or a plain file.

Also adds a few convolutions for backwards compatibility:

- The `tarball` fetcher type is kept. It’s now an alias for `url`, but with `unpack` defaulting to `true`.
- URLs that used to be handled by the `tarball` fetcher (i.e. `http(s)://` and `file://` URLs whose path ends with a known archive extension) still are, meaning that they map to the `url` fetcher with `unpack` set to `true`.

Fix #3785

Originally taken from #4153 (although most of it has been rewritten)
/cc @tomberek