#announcements
Krishna Sangeeth KS
09/04/2022, 12:50 PM
Hey folks, thanks for creating this project. It looks very interesting and I am excited to try it out. I am trying Orchest for the first time today and went with the self-hosted setup on K8s. I used the `orchest install` option and the installation was clean. I am trying to run the sample pipeline from `global-key-value`. It fails for some reason and I couldn't find any logs associated with it. In the k8s namespace, I briefly saw this pod coming up and then I think it got cleaned up. Any pointers on how to debug this further?
pipeline-run-task-b9ce091c-baad-4477-915a-77f46dc4c192-12230460   0/2     Init:1/2   0          18s
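For readers following along: the kind of debugging being asked about can be sketched with generic kubectl commands. This is a hedged sketch, not advice from the thread itself; the `orchest` namespace comes from the output above, and `POD` is a placeholder for the short-lived pod's name.

```shell
# Hedged sketch: inspecting a pipeline-run pod that gets cleaned up quickly.
# "orchest" namespace taken from the thread; POD is a hypothetical placeholder.
inspect_orchest_events() {
  if command -v kubectl >/dev/null 2>&1 && kubectl get ns orchest >/dev/null 2>&1; then
    # Events usually outlive the pod and often name the failing (init) container:
    kubectl get events -n orchest --sort-by=.lastTimestamp | tail -n 20
    # While the pod still exists, its description and init-container logs help:
    # kubectl describe pod "$POD" -n orchest
    # kubectl logs "$POD" -n orchest -c init
  else
    echo "no cluster reachable; commands shown for reference only"
  fi
}
result="$(inspect_orchest_events)"
echo "$result"
```

Events are often the most useful artifact here, since they stick around after the pod itself is gone.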
juanlu
09/05/2022, 6:01 AM
Hello @Krishna Sangeeth KS! If you click on the failing step, you should see a bar on the right where you can access the step logs. Alternatively, you can also click the "Logs" button on the top right, which should show the same information. Do you see anything there that gives you a hint of what failed?
Krishna Sangeeth KS
09/05/2022, 6:18 AM
Hey @juanlu, unfortunately the logs were empty. It was the first thing I checked.
juanlu
09/05/2022, 6:20 AM
That's unfortunate. Let me check if I can reproduce the problem.
Yannick
09/05/2022, 7:26 AM
Hmmm, curious to know what happened here. Given the `Init:1/2` I would guess that one of the init containers failed, resulting in a `Failure` of the step. @juanlu Let us know whether you can reproduce 😸
juanlu
09/05/2022, 8:13 AM
I just imported https://github.com/orchest-examples/global-key-value-store, ran the two steps, and it worked the first time on my Cloud instance, so I can't reproduce.
Yannick
09/05/2022, 8:23 AM
@Krishna Sangeeth KS Given that @juanlu was unable to reproduce the issue, would you mind sharing some details about your setup? You mentioned you are self-hosting Orchest:
• What version of Orchest are you running?
• Where are you running Orchest (e.g. in EKS, minikube, an EC2 instance, etc.)?
If you are able to share the logs of the `pipeline-run-task` that has failed, then that would be amazing! 🤓
Rick Lamers
09/05/2022, 8:29 AM
> If you are able to share the logs of the `pipeline-run-task` that has failed, then that would be amazing!
I think the logs were empty
8:30 AM
👋 hi @Krishna Sangeeth KS, just wanted to welcome you to the Slack.
Yannick
09/05/2022, 8:38 AM
> I think the logs were empty
I meant the logs in Kubernetes (the output line indicates @Krishna Sangeeth KS is using `k9s` or knows his way around `kubectl`), not the logs that we show in Orchest, of course 😉
Krishna Sangeeth KS
09/14/2022, 4:08 PM
@Yannick, sorry for the delay in responding.
> What version of Orchest are you running?
orchest-cli, version 0.5.2
> Where are you running Orchest (e.g. in EKS, minikube, EC2 instance, etc.)?
This is my local machine, using Kubernetes from Docker Desktop.
4:10 PM
The pipeline task pod came up only for a short moment and got cleaned up. I could get the init container logs, as below:
kubectl logs -f pipeline-run-task-0219f617-087e-4de3-a85a-a2830a16fbf3-3067194210 -n orchest -c init
time="2022-09-14T16:08:51.926Z" level=info msg="Starting Workflow Executor" executorType=emissary version=v3.2.6
time="2022-09-14T16:08:51.929Z" level=info msg="Creating a emissary executor"
time="2022-09-14T16:08:51.929Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=orchest podName=pipeline-run-task-0219f617-087e-4de3-a85a-a2830a16fbf3-3067194210 template="{\"name\":\"step\",\"inputs\":{\"parameters\":[{\"name\":\"step_uuid\",\"value\":\"59f13504-785c-43eb-b3c0-9284c7d30465\"},{\"name\":\"image\",\"value\":\"10.96.0.2/orchest-env-2e68f108-c9c8-40aa-939f-2368b84d6df4-9327009b-602f-49d8-a727-fee7a063fa7c:1\"},{\"name\":\"working_dir\",\"value\":\"\"},{\"name\":\"project_relative_file_path\",\"value\":\"example.ipynb\"},{\"name\":\"pod_spec_patch\",\"value\":\"{\\\"terminationGracePeriodSeconds\\\": 1, \\\"containers\\\": [{\\\"name\\\": \\\"main\\\", \\\"env\\\": [{\\\"name\\\": \\\"ORCHEST_STEP_UUID\\\", \\\"value\\\": \\\"59f13504-785c-43eb-b3c0-9284c7d30465\\\"}, {\\\"name\\\": \\\"ORCHEST_SESSION_UUID\\\", \\\"value\\\": \\\"2e68f108-c9c8-40aa080bd4af-10c1-4bb1\\\"}, {\\\"name\\\": \\\"ORCHEST_SESSION_TYPE\\\", \\\"value\\\": \\\"interactive\\\"}, {\\\"name\\\": \\\"ORCHEST_PIPELINE_UUID\\\", \\\"value\\\": \\\"080bd4af-10c1-4bb1-b500-22dd7bf9d519\\\"}, {\\\"name\\\": \\\"ORCHEST_PIPELINE_PATH\\\", \\\"value\\\": \\\"/pipeline.json\\\"}, {\\\"name\\\": \\\"ORCHEST_PROJECT_UUID\\\", \\\"value\\\": \\\"2e68f108-c9c8-40aa-939f-2368b84d6df4\\\"}, {\\\"name\\\": \\\"ORCHEST_NAMESPACE\\\", \\\"value\\\": \\\"orchest\\\"}, {\\\"name\\\": \\\"ORCHEST_CLUSTER\\\", \\\"value\\\": \\\"cluster-1\\\"}], \\\"restartPolicy\\\": \\\"Never\\\", \\\"imagePullPolicy\\\": 
\\\"IfNotPresent\\\"}]}\"},{\"name\":\"tests_uuid\",\"value\":\"59f13504-785c-43eb-b3c0-9284c7d30465\"},{\"name\":\"container_runtime\",\"value\":\"docker\"},{\"name\":\"container_runtime_image\",\"value\":\"orchest/image-puller:v2022.08.11\"}]},\"outputs\":{},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"10.96.0.2/orchest-env-2e68f108-c9c8-40aa-939f-2368b84d6df4-9327009b-602f-49d8-a727-fee7a063fa7c:1\",\"command\":[\"/orchest/bootscript.sh\",\"runnable\",\"\",\"example.ipynb\"],\"resources\":{\"requests\":{\"cpu\":\"1m\"}},\"volumeMounts\":[{\"name\":\"userdir-pvc\",\"mountPath\":\"/data\",\"subPath\":\"data\"},{\"name\":\"userdir-pvc\",\"mountPath\":\"/userdir/projects\",\"subPath\":\"projects\"},{\"name\":\"userdir-pvc\",\"mountPath\":\"/project-dir\",\"subPath\":\"projects/global-key-value-store\"},{\"name\":\"userdir-pvc\",\"mountPath\":\"/pipeline.json\",\"subPath\":\"projects/global-key-value-store/main.orchest\"}]},\"initContainers\":[{\"name\":\"image-puller\",\"image\":\"orchest/image-puller:v2022.08.11\",\"command\":[\"/pull_image.sh\"],\"env\":[{\"name\":\"IMAGE_TO_PULL\",\"value\":\"10.96.0.2/orchest-env-2e68f108-c9c8-40aa-939f-2368b84d6df4-9327009b-602f-49d8-a727-fee7a063fa7c:1\"},{\"name\":\"CONTAINER_RUNTIME\",\"value\":\"docker\"}],\"resources\":{},\"volumeMounts\":[{\"name\":\"container-runtime-socket\",\"mountPath\":\"/var/run/runtime.sock\"}]}],\"retryStrategy\":{\"limit\":\"0\",\"backoff\":{\"maxDuration\":\"0s\"}},\"securityContext\":{\"runAsUser\":0,\"runAsGroup\":1,\"fsGroup\":1},\"podSpecPatch\":\"{\\\"terminationGracePeriodSeconds\\\": 1, \\\"containers\\\": [{\\\"name\\\": \\\"main\\\", \\\"env\\\": [{\\\"name\\\": \\\"ORCHEST_STEP_UUID\\\", \\\"value\\\": \\\"59f13504-785c-43eb-b3c0-9284c7d30465\\\"}, {\\\"name\\\": \\\"ORCHEST_SESSION_UUID\\\", \\\"value\\\": \\\"2e68f108-c9c8-40aa080bd4af-10c1-4bb1\\\"}, {\\\"name\\\": \\\"ORCHEST_SESSION_TYPE\\\", \\\"value\\\": \\\"interactive\\\"}, {\\\"name\\\": 
\\\"ORCHEST_PIPELINE_UUID\\\", \\\"value\\\": \\\"080bd4af-10c1-4bb1-b500-22dd7bf9d519\\\"}, {\\\"name\\\": \\\"ORCHEST_PIPELINE_PATH\\\", \\\"value\\\": \\\"/pipeline.json\\\"}, {\\\"name\\\": \\\"ORCHEST_PROJECT_UUID\\\", \\\"value\\\": \\\"2e68f108-c9c8-40aa-939f-2368b84d6df4\\\"}, {\\\"name\\\": \\\"ORCHEST_NAMESPACE\\\", \\\"value\\\": \\\"orchest\\\"}, {\\\"name\\\": \\\"ORCHEST_CLUSTER\\\", \\\"value\\\": \\\"cluster-1\\\"}], \\\"restartPolicy\\\": \\\"Never\\\", \\\"imagePullPolicy\\\": \\\"IfNotPresent\\\"}]}\"}" version="&Version{Version:v3.2.6,BuildDate:2021-12-17T20:00:26Z,GitCommit:db7d90a1f609685cfda73644155854b06fa5d28b,GitTag:v3.2.6,GitTreeState:clean,GoVersion:go1.16.12,Compiler:gc,Platform:linux/arm64,}"
time="2022-09-14T16:08:52.000Z" level=info msg="Start loading input artifacts..."
time="2022-09-14T16:08:52.001Z" level=info msg="Alloc=4701 TotalAlloc=9302 Sys=73553 NumGC=3 Goroutines=3"
4:15 PM
From the logs, I saw this message where the timestamp for the deadline looks odd. Not sure if it is related:
time="2022-09-14T16:08:51.929Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC"
Yannick
09/14/2022, 4:22 PM
Thanks for sharing! Could you also share the output of running `orchest version`?
> This is my local machine, using Kubernetes from Docker Desktop.
And are you running on Mac or Windows?
Krishna Sangeeth KS
09/14/2022, 4:23 PM
orchest version
v2022.08.11
I am running on a Mac.
Yannick
09/14/2022, 4:25 PM
Are you running Rosetta emulation? (https://support.apple.com/en-us/HT211861) Without it Orchest won't run, because our Docker images don't run on the `arm` stack yet.
4:25 PM
Given the log output of `Platform:linux/arm64` I am guessing you aren't?
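Yannick's guess can be checked directly on the host. A minimal sketch; the "amd64-only images" claim comes from his message above:

```shell
# Hedged sketch: report the host architecture, since (per the thread) the
# Orchest images are built for amd64 only and arm64 hosts need emulation.
arch="$(uname -m)"
case "$arch" in
  arm64|aarch64) note="arm64 host: amd64-only images need Rosetta/qemu emulation" ;;
  x86_64|amd64)  note="amd64 host: amd64 images should run natively" ;;
  *)             note="unrecognized architecture: $arch" ;;
esac
echo "$note"
```

On an Apple Silicon Mac this prints the arm64 line, matching the `Platform:linux/arm64` seen in the executor log.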
Krishna Sangeeth KS
09/14/2022, 4:27 PM
Oh, interesting. I have Rosetta set up as well, but I might have used the normal terminal. Do you suggest running `orchest install` via a Rosetta-emulated terminal?
Yannick
09/14/2022, 4:31 PM
> Do you suggest running `orchest install` via a Rosetta-emulated terminal?
I don't think that will make a difference (but I don't really have any experience in this area) 🤔 @Rick Lamers Given that you are running on an M1 as well, how did you set that up?
Krishna Sangeeth KS
09/14/2022, 4:37 PM
Hmm, okay. For the Docker app, I don't see an option to run in Rosetta mode, unlike the Terminal app. If you don't mind, I am curious to know which component of `orchest` would have trouble running on the arm architecture.
Yannick
09/14/2022, 4:44 PM
From a theoretical standpoint there isn't anything in Orchest that prevents us from running on arm; however, Orchest is fully containerized and the images are only built for amd64. Thus we can't run on arm architectures (except when you are running Rosetta emulation, which should work out of the box as far as I know).
4:46 PM
Maybe this link helps? https://docs.docker.com/desktop/mac/apple-silicon/ Otherwise I think it is a good idea to wait for a response from @Rick Lamers.
Rick Lamers
09/14/2022, 6:09 PM
If you have Rosetta enabled on the system and you're using the Apple Silicon version of Docker, then things should work without any "special" type of installation. E.g. once Kubernetes is enabled in Docker for Desktop, you can run `orchest install`, which should find the cluster configuration file and bring up Orchest. I'll try to reproduce this issue on my M1 Mac and report back any findings.
7:05 PM
I can confirm Docker for Desktop on macOS has permission errors with image pulling. I recall now that that's why we haven't listed it as an installation target in the docs yet: https://docs.orchest.io/en/stable/getting_started/installation.html#installing-orchest The error is:
time="2022-09-14T19:00:32Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: Error while loading /var/lib/containers/storage/vfs/dir/68a85fa9d77ecac87de23805c4be8766bda08a86787e324036cbcf84b62640fa: Permission denied\n stderr: "
The easiest thing to do is to run the convenience script: https://github.com/orchest/orchest#installation
curl -fsSL https://get.orchest.io > convenience_install.sh
bash convenience_install.sh
It will use `minikube` and Docker on macOS and should work without issues. Note it's a good idea to disable Docker for Desktop's Kubernetes and, after that operation completes, to start a new terminal, to make sure the convenience script picks up the cluster configuration of `minikube` and not Docker for Desktop's Kubernetes. If this all sounds like too much work, you can always check out the free tier at https://cloud.orchest.io or install on a Linux host using something like an EC2 instance on AWS.
7:13 PM
Verified no issues with `minikube` on macOS M1 with Rosetta available on the system.
7:18 PM
Note: with some of the upcoming changes we're making to Orchest container image building, we might be able to support Docker for Desktop Kubernetes over the next couple of weeks (2-6 is my best estimate).
Krishna Sangeeth KS
09/14/2022, 7:20 PM
Thanks @Rick Lamers for confirming. I am also making an attempt with Rancher Desktop; if it doesn't work out, I will try `minikube`. I do remember that there were a bunch of problems pulling images; I did manual docker pulls, which were faster, and then ran `orchest install`. That seemed to work at the time, as all the pods were up. I did see another message in the Argo controller about `Max duration limit reached`. Not sure if that was also somehow related.
> If this all sounds like too much work, you can always check out the free tier at https://cloud.orchest.io
I wanted to explore the system in more detail as I am learning about building cloud-native apps as well, so this is all useful learning for me.
Rick Lamers
09/14/2022, 7:37 PM
> I wanted to explore the system in more detail as I am learning about building cloud-native apps as well, so this is all useful learning for me.
👍
7:38 PM
Rancher Desktop uses k3s under the hood, so I'm fairly optimistic that it will work in principle. Not sure how well they handle multi-arch k8s apps (which is what Orchest is; some of our containers are available in both amd64 and arm64).
7:39 PM
Worth repeating: the main deploy target we're optimizing for is cloud-managed k8s clusters like EKS and GKE, in addition to single-node Linux-based setups like EC2 Ubuntu + MicroK8s or `minikube`.
Yannick
09/15/2022, 9:04 AM
> I did see another message in the Argo controller about `Max duration limit reached`
I would guess that this message was shown because Argo Workflows uses init containers to work, and you most likely did not pre-pull those init container images (like you did for the Orchest service containers). That way you end up with the same permission errors when Docker/Kubernetes tries to pull the image for you.
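A hedged sketch of what pre-pulling those init container images might look like. The `argoproj/argoexec:v3.2.6` tag is inferred from the executor log earlier in the thread, and `orchest/image-puller:v2022.08.11` is named in the pod spec, so treat both tags as assumptions about this particular install; it also assumes Docker Desktop's runtime is the one the local Kubernetes uses.

```shell
# Hedged sketch: pre-pull the images the step's init containers use, so the
# cluster runtime never has to pull them itself. Tags are inferred from the
# thread's logs/pod spec and may differ on other installs.
pull_init_images() {
  if command -v docker >/dev/null 2>&1; then
    docker pull argoproj/argoexec:v3.2.6 || echo "pull failed (offline or daemon down?)"
    docker pull orchest/image-puller:v2022.08.11 || echo "pull failed (offline or daemon down?)"
  else
    echo "docker not available; commands shown for reference only"
  fi
}
pull_output="$(pull_init_images)"
echo "$pull_output"
```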
Krishna Sangeeth KS
09/15/2022, 4:24 PM
@Rick Lamers Understood. @Yannick, yeah that makes sense.