FAQ

How do I access Nautilus?

See Get access

How do I use Kubernetes?

See Quick Start

How do I use S3?

See Storage

Where is Nautilus located?

Nautilus is a heterogeneous, distributed cluster, with computational resources of various shapes and sizes made available by research institutions spanning multiple continents! Check out the Cluster Map to see where the nodes are located.

I’m getting “failed to refresh token”, “oauth2”, or “server_error” errors when trying to access the cluster with kubectl.

Download the config file again and replace your old one.
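A minimal sketch of the replacement, assuming the fresh config was saved to ~/Downloads/config (the actual download path and filename may differ):

```shell
# Replace the stale kubectl config with the freshly downloaded one.
mkdir -p ~/.kube
cp ~/Downloads/config ~/.kube/config
chmod 600 ~/.kube/config   # kubectl warns about group/world-readable configs
```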

This happens too often, and I need to pull the config file over and over again.

You’re probably running kubectl concurrently (from several shells in parallel), which breaks the token refresh mechanism. For scripts, use ServiceAccounts instead.
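A ServiceAccount authenticates with its own token instead of your OIDC token, so concurrent scripts don’t fight over refreshes. A minimal sketch of such a manifest is below; the names and namespace are placeholders, and your namespace’s RBAC policy may restrict which roles you can bind:

```yaml
# Hypothetical ServiceAccount for scripted kubectl access,
# bound to the built-in "edit" ClusterRole within one namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: script-bot          # placeholder name
  namespace: my-namespace   # placeholder namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: script-bot-edit
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: script-bot
  namespace: my-namespace
```

After applying the manifest, a recent kubectl (1.24+) can mint a token for the account with `kubectl create token script-bot -n my-namespace`, which your script can use as a bearer token.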

My Nautilus portal login is not working anymore.

Be consistent in which institution you choose from the CILogon list. Even though UCSD uses Google for its AD accounts, CILogon treats Google and UCSD as two different institutions, so logging in through each one creates two different accounts.

My pod is stuck Terminating.

This happens for two reasons:

* The node running your pod went offline. The pod will be terminated once the node is back online.
* The storage attached to the pod can’t be unmounted.

In both cases you can ask an admin in Rocket.Chat to look at your pod, or just wait for somebody to fix it.
DON’T USE kubectl delete --grace-period=0 --force to delete stuck pods.
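A quick way to check which of the two causes applies (pod and namespace names below are placeholders):

```shell
# Find out which node the pod was running on.
kubectl get pod my-pod -n my-namespace -o wide

# If that node shows NotReady, the node is offline (cause 1).
kubectl get node node-name

# Otherwise, check the pod's Events for volume unmount errors (cause 2).
kubectl describe pod my-pod -n my-namespace
```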

I tried to use nvprof in my GPU pod and got an error.

There is a vulnerability in the NVIDIA drivers that is still not fixed, so GPU profiling is disabled by default. Enabling it would require too much effort, so for now we keep the default setting. Hopefully it will be fixed soon.

How do I acknowledge support from PRP / Nautilus in a paper?

This work was supported in part by NSF awards CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, the University of California Office of the President, and the University of California San Diego’s California Institute for Telecommunications and Information Technology/Qualcomm Institute. Thanks to CENIC for the 100Gbps networks.