-
This will not answer your questions, but have you ever thought about using Red Hat OpenShift Dev Spaces (https://developers.redhat.com/products/openshift-dev-spaces)? I know it is commercial software, but it is built on the open-source Eclipse Che (https://eclipse.dev/che). Unfortunately, Eclipse Che does not run on ARM as far as I know. I just mention it because I would be happy to use it at home as well. Maybe also of interest:
-
I somehow understand your initial focus on the hardware, but in my opinion you have a bit of a chicken-and-egg problem. For example, if you want to use Eclipse Che, you cannot use ARM (as far as I know), so focusing on ARM SBCs (or other ARM systems) would rule out Eclipse Che later. Another example is PyTorch for AI: if you want hardware acceleration there, you need a CUDA-capable GPU (https://www.geeksforgeeks.org/pytorch-system-requirements/). As I understand it, this would not work with a Google Coral AI accelerator, or only with some conversions and probable drawbacks. There are many other such examples, so your hardware selection may later restrict your software choices. Just be aware of that; see the small device check below for what I mean on the PyTorch side.
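To illustrate, here is a minimal Python sketch (assuming PyTorch is installed) of the usual device-selection check; on an ARM SBC without an NVIDIA GPU it will simply fall back to the CPU:

```python
# Minimal sketch: probe which accelerator PyTorch can actually use.
# On an ARM SBC without an NVIDIA GPU this ends up on the CPU; as far as I know a
# Coral Edge TPU is not visible to PyTorch at all (it uses its own TFLite/Edge TPU runtime).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")   # NVIDIA GPU with CUDA (e.g. a Jetson module)
else:
    device = torch.device("cpu")    # no CUDA-capable GPU found

print(f"Running on: {device}")
x = torch.randn(512, 512, device=device)
print((x @ x).shape)                # small matmul just to confirm the device works
```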
-
Hello! Pleased to meet you; you just got my attention! :)
What you're describing here is pretty much what my associate and I are currently working on! Let's first talk about the stack. This is obviously just the main idea, things may (and surely will) change, but that's the starting point:
OKD brings its operators, applications, Helm integration, Prometheus monitoring, alerting, automated upgrades and a lot more, everything out of the box, enterprise grade, for free. The most interesting point: the automation. We're currently working on the core of a new infrastructure deployment tool. This won't be yet another Terraform variation; we're seeing things differently. The name of the tool is Harmony. At the heart of the tool: Rust. A simple DSL, type safety, ease of use, focused on DX. I won't go into more details here; I'll leave that to my associate, who designed and implemented all the base architecture of the project. The foundations are already working well.
This is very basic and opinionated for now, but it's only the beginning :-) The project will be open source. For the mini-rack part of the story, I have everything in hand except my 3 worker nodes, but they are in the mail, so I should soon be able to complete the hardware part of the build. Lots of design / 3D printing in this thing :-) I haven't opened a build showcase yet but it's on my to-do list. Hardware (from top to bottom):
3D printed stuff (my designs):
If you feel like being our first beta user, that would be nice! You can come chat with us on our Discord: https://discord.gg/jnCAtMYa. It's mainly in French (because we're from Quebec, and the server is super small), but there is no problem at all speaking English. Discord or not, let's also continue the discussion here :) What do you think of all that? Sylvain
-
Hello,
I would like to get some help and/or have a discussion about my mini-rack idea. I'm a software developer, so I'm hoping to get a lot of information about the hardware and how to build my rack.
The goal of my mini-rack is to create a complete development environment for AI and/or web development for up to 5 developers, who then connect to the rack's cluster from their laptops.
I've been working with Kubernetes for years, so I generally want to run everything as containers on the system. A short list of tools I currently use:
Building on this, there are some “infrastructure tools” such as:
My first idea was Raspberry Pis, but I think systems with 32 GB of RAM or more would be better; the question is what would be suitable.
A Jetson Nano might be a nice addition for the AI computations, so I would rather build a hybrid system.
I would like to use an external NAS, ideally with an S3-compatible interface, for any kind of user data (Git repos, AI models, datasets), so that I have a centrally usable storage layer; the sketch below shows what I have in mind from the client side.
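Just to make that concrete, here is a minimal Python sketch of how clients on the cluster could talk to such an S3-compatible NAS; the endpoint URL, credentials, and bucket name are placeholders, not a specific product:

```python
# Minimal sketch: use an S3-compatible NAS as the central storage layer.
# Endpoint, credentials, and bucket names are placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://nas.local:9000",    # hypothetical S3-compatible NAS endpoint
    aws_access_key_id="dev-access-key",      # placeholder credentials
    aws_secret_access_key="dev-secret-key",
)

s3.create_bucket(Bucket="ai-models")         # e.g. one bucket per data class
s3.upload_file("model.safetensors", "ai-models", "llm/model.safetensors")

# List what is stored, to confirm the central layer is reachable from any node.
for obj in s3.list_objects_v2(Bucket="ai-models").get("Contents", []):
    print(obj["Key"], obj["Size"])
```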
A Wi-Fi access point and possibly a router for LAN or mobile internet would be good.
In general, I would like to provision everything with OpenTofu / Terraform (Cisco hardware, for example, supports this), so that I can quickly set the system up again depending on how it is used. I would also like to build my own images for the compute modules so that I can quickly reinstall them as well.
The important goal for me is to optimize cost, both for compute and for space in the case. I don't need a high-performance cluster, but there has to be enough computing power for simple tasks for several users. At the same time, the system should be easy to transport; I was thinking of something like this, for example: https://gatorco.com/shop-by-category/racks/shock-racks/
For me, it should just be a matter of plugging it into a power socket, opening the laptop, and connecting to the network in the case.
I would be very happy to have a discussion and would welcome any suggestions and questions.