Thanks ❤️ for trying it out. If you encounter any problems or issues, feel free to ask here. 🙂
Thanks ❤️ for asking this question. Yes, exactly: the search engine shares nothing with the upstream engines except the search query and your IP address, which makes it really private. We will also be adding Tor and I2P support, which will remove the concern of sharing the IP address with the upstream search engines as well. 🙂
Ok, thanks ❤️. I really appreciate your help. 🙂
Ok sure, thanks for the suggestion. I will make the change as soon as possible. Also, next time I will make sure to provide a brief description with the release announcement.
Thanks for taking a look at my project 🙂.
We are already planning to add initial support for this in the coming releases. Right now, we are looking for someone with more in-depth knowledge of managing memory efficiently (reducing heap usage, etc.), so if you could help with this, please let us know. 🙂
Sorry for the late reply. I would suggest opening an issue on this topic here:
https://github.com/neon-mmd/websurfx/issues
I feel it would be better to have the discussion there, and I will be able to explain in more depth :).
Sorry for the delayed reply.
Thanks for the suggestion :). Currently, we don't have any plans to support Windows or macOS, because they are notoriously bad for privacy. Since their source code is closed, you never know what they are doing: data mining, you name it. That is why we have no plans to support them as of now, but if we get a good number of feature requests in this area, we might consider adding support for these platforms too :).
Sorry for the delayed reply.
Thanks for the feedback on my project :).
Sorry for the delayed reply.
Great project, thanks for sharing.
You’re welcome :).
Quick question: will this be self-hosted only, or will there be a public instance or something like that?
Yes, we are actually working on providing a page dedicated to allowing everyone in the community to contribute their Websurfx instances, which will let others use and try them out. Something similar to what Searx does (searx.space). Right now it is still a work in progress.
Sorry for the delayed reply.
Thanks for checking out my project :). We do not provide a public instance right now, but we are already working on it: a page dedicated to showing all the instances hosted by community members, which will allow others to use and try them out as well :).
Ok, no problem :). If you need any help with anything, just DM us here or on our Discord server. We would be glad to help :).
Ahh, I see. Why didn't I think of doing something like this before? Thanks for the help :). The thing is, I am not very good with Docker, and I am in the process of finding someone who can work in this area, for example on reducing build times, caching, etc. One of the things we want to improve right now is the build time: I am using a layered caching approach, but it still takes about 800 seconds, which is not great. So if you are interested, I would suggest making a PR at our repository. We would be glad to have you as one of the project's contributors, and maybe in the future as a maintainer too. Currently, the Dockerfile looks like this:
FROM rust:latest AS chef
# We only pay the installation cost once,
# it will be cached from the second build onwards
RUN cargo install cargo-chef
WORKDIR /app
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo install --path .
# We do not need the Rust toolchain to run the binary!
FROM gcr.io/distroless/cc-debian12
COPY --from=builder /app/public/ /opt/websurfx/public/
# -- 1 (user-editable config file)
COPY --from=builder /app/websurfx/config.lua /etc/xdg/websurfx/config.lua
# -- 2 (user-editable allowlist)
COPY --from=builder /app/websurfx/allowlist.txt /etc/xdg/websurfx/allowlist.txt
# -- 3 (user-editable blocklist)
COPY --from=builder /app/websurfx/blocklist.txt /etc/xdg/websurfx/blocklist.txt
COPY --from=builder /usr/local/cargo/bin/* /usr/local/bin/
CMD ["websurfx"]
Note: the lines marked 1, 2, and 3 in the Dockerfile are the user-editable files, i.e. the config file and the custom filter lists.
Sorry for the delayed reply.
Ok, thanks for suggesting this. I had not thought about this area in particular, but I would be really interested in having the Docker image uploaded to Docker Hub. The only issue is that the app requires the config file, the blocklist, and the allowlist to be included within the Docker image. So if a prebuilt image is provided, is it possible to edit those files within the Docker container? If so, then it is fine. Otherwise it would still be good, but it would limit the usage to users who are satisfied with the default config, while others would still need to build the image manually, which is not great.
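For what it's worth, a prebuilt image would not necessarily force users to rebuild: bind mounts can shadow the baked-in files at runtime. A rough sketch, where the image name `neonmmd/websurfx` and the port are hypothetical, and the `/etc/xdg/websurfx/` paths are the ones from the Dockerfile above:

```sh
# Shadow the baked-in, user-editable files with local copies at runtime;
# the container sees the mounted versions, no rebuild required.
# (image name and port are hypothetical here)
docker run -p 8080:8080 \
  -v "$PWD/config.lua:/etc/xdg/websurfx/config.lua:ro" \
  -v "$PWD/allowlist.txt:/etc/xdg/websurfx/allowlist.txt:ro" \
  -v "$PWD/blocklist.txt:/etc/xdg/websurfx/blocklist.txt:ro" \
  neonmmd/websurfx
```

With something like this, only users who want a custom build of the binary itself would need to rebuild the image.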
Also, as a side comment in case you missed it, some updates on the project: the custom filter lists feature has been merged. If you wish to take a look at this PR, here.
Hello again :)
Sorry for the delayed reply.
Essentially, the way we achieve ad-free results is this: when we fetch results from the upstream search engines, we strip out the ad results from all of them, bring the remaining results into a form where they can be aggregated, and then aggregate them. That's how we achieve it.
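In case it helps to see it concretely, here is a minimal sketch of that filtering step in Rust. The `RawResult` type and its `is_ad` flag are hypothetical stand-ins for illustration; the actual structs and field names in the codebase differ:

```rust
// Hypothetical parsed result from one upstream engine; the real
// project uses its own types, so this is only an illustration.
struct RawResult {
    title: String,
    url: String,
    is_ad: bool, // flagged while parsing, e.g. from the engine's ad markup
}

// Drop everything flagged as an ad before the results are normalized
// and handed to the aggregation step.
fn strip_ads(results: Vec<RawResult>) -> Vec<RawResult> {
    results.into_iter().filter(|r| !r.is_ad).collect()
}
```

The real work is in detecting ads while parsing each engine's response; once a result is flagged, dropping it is as simple as the filter above.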
Hello again :)
Sorry for the delayed reply.
Right now, we do not have ranking in place, but we are planning to have it soon. Our goal is to make it as organic as possible, so you don’t get unrelated results when you query something through our engine.
What the project does is take the user's query, plus various search parameters if necessary, and pass them to the upstream search engines via GET requests. Once all the results are gathered, we bring them into a form where we can aggregate them, and then remove duplicates from the aggregated results. If the same result is returned by two engines, we list both engines' names against it. That's all that is going on, in simple terms 🙂. If you have more doubts, feel free to open an issue at our project; I would be glad to answer.
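As a rough illustration of the aggregation and deduplication step described above, here is a sketch in Rust. The simplified `SearchResult` type is hypothetical, not the project's real type:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Hypothetical normalized result type; the real project defines its own.
struct SearchResult {
    url: String,
    title: String,
    engines: Vec<String>, // names of the upstream engines that returned it
}

// Merge results from all upstream engines, deduplicating by URL.
// When several engines return the same URL, the copies collapse into
// one result that lists every engine's name.
fn aggregate(results: Vec<SearchResult>) -> Vec<SearchResult> {
    let mut merged: HashMap<String, SearchResult> = HashMap::new();
    for result in results {
        match merged.entry(result.url.clone()) {
            Entry::Occupied(mut slot) => {
                // Duplicate URL: keep the first copy, merge engine names.
                let existing = slot.get_mut();
                for engine in result.engines {
                    if !existing.engines.contains(&engine) {
                        existing.engines.push(engine);
                    }
                }
            }
            Entry::Vacant(slot) => {
                slot.insert(result);
            }
        }
    }
    merged.into_values().collect()
}
```

Keying the map on the URL is what makes two engines' copies of the same page fold into one entry carrying both engine names, matching the behaviour described above.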
Yes, it is, but I just wanted to emphasize that my project is also open source, because leaving that out could raise doubts about whether it is or not. So I added it to make it clear.
Hello again :)
Sorry for the late reply. I would suggest opening an issue on this topic here:
https://github.com/neon-mmd/websurfx/issues
I feel it would be better to have the discussion there, and I will be able to explain in more depth.
(full disclosure: I am the owner of the project)
Thanks ❤ for trying out our project. Yes, that is something being worked on currently. Some pages already support a mobile layout, but others are a work in progress, so they will be addressed soon, probably in the next few releases. 🙂