move "storage" section out of the extensions section since it is no longer a (meta) extension
@@ -1,43 +0,0 @@
----
-title: "Ports"
-date: 2018-05-02T00:00:00+00:00
-weight: 41
-geekdocRepo: https://github.com/owncloud/ocis
-geekdocEditPath: edit/master/docs/extensions/storage
-geekdocFilePath: ports.md
----
-
-Currently, every service needs to be configured with a port so oCIS can start them on localhost. We will automate this by using a service registry for more services, until eventually only the proxy has to be configured with a public port.
-
-For now, the storage service uses these ports to preconfigure those services:
-
-| port | service                    |
-|------|----------------------------|
-| 9109 | health, used by cli?       |
-| 9140 | frontend                   |
-| 9141 | frontend debug             |
-| 9142 | gateway                    |
-| 9143 | gateway debug              |
-| 9144 | users                      |
-| 9145 | users debug                |
-| 9146 | authbasic                  |
-| 9147 | authbasic debug            |
-| 9148 | authbearer                 |
-| 9149 | authbearer debug           |
-| 9150 | sharing                    |
-| 9151 | sharing debug              |
-| 9154 | storage home grpc          |
-| 9155 | storage home http          |
-| 9156 | storage home debug         |
-| 9157 | storage users grpc         |
-| 9158 | storage users http         |
-| 9159 | storage users debug        |
-| 9160 | groups                     |
-| 9161 | groups debug               |
-| 9164 | storage app-provider       |
-| 9165 | storage app-provider debug |
-| 9178 | storage public link        |
-| 9179 | storage public link data   |
-| 9215 | storage meta grpc          |
-| 9216 | storage meta http          |
-| 9217 | storage meta debug         |
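The removed table maps services to fixed localhost ports. For scripts that probe these defaults, the mapping can be captured as a small lookup; a minimal sketch (the dict mirrors a subset of the table above, the helper name is made up):

```python
# Default localhost ports preconfigured by the storage service
# (copied from the removed table; subject to change once the
# service registry takes over port assignment).
DEFAULT_PORTS = {
    "frontend": 9140,
    "frontend debug": 9141,
    "gateway": 9142,
    "gateway debug": 9143,
    "users": 9144,
    "sharing": 9150,
    "storage home grpc": 9154,
    "storage users grpc": 9157,
    "storage meta grpc": 9215,
}

def port_for(service: str) -> int:
    """Return the preconfigured local port for a service name."""
    return DEFAULT_PORTS[service]
```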
@@ -17,9 +17,9 @@ To create a truly federated storage architecture oCIS breaks down the old ownClo

 The below diagram shows the core concepts that are the foundation for the new architecture:
 - End user devices can fetch the list of *storage spaces* a user has access to, by querying one or multiple *storage space registries*. The list contains a unique endpoint for every *storage space*.
-- [*Storage space registries*]({{< ref "../extensions/storage/terminology#storage-space-registries" >}}) manage the list of storage spaces a user has access to. They may subscribe to *storage spaces* in order to receive notifications about changes on behalf of an end users mobile or desktop client.
-- [*Storage spaces*]({{< ref "../extensions/storage/terminology#storage-spaces" >}}) represent a collection of files and folders. A users personal files are contained in a *storage space*, a group or project drive is a *storage space*, and even incoming shares are treated and implemented as *storage spaces*. Each with properties like owners, permissions, quota and type.
-- [*Storage providers*]({{< ref "../extensions/storage/terminology#storage-providers" >}}) can hold multiple *storage spaces*. At an oCIS instance, there might be a dedicated *storage provider* responsible for users personal storage spaces. There might be multiple, either to shard the load, provide different levels of redundancy or support custom workflows. Or there might be just one, hosting all types of *storage spaces*.
+- [*Storage space registries*]({{< ref "./storage/terminology#storage-space-registries" >}}) manage the list of storage spaces a user has access to. They may subscribe to *storage spaces* in order to receive notifications about changes on behalf of an end user's mobile or desktop client.
+- [*Storage spaces*]({{< ref "./storage/terminology#storage-spaces" >}}) represent a collection of files and folders. A user's personal files are contained in a *storage space*, a group or project drive is a *storage space*, and even incoming shares are treated and implemented as *storage spaces*. Each comes with properties like owners, permissions, quota and type.
+- [*Storage providers*]({{< ref "./storage/terminology#storage-providers" >}}) can hold multiple *storage spaces*. At an oCIS instance there might be a dedicated *storage provider* responsible for users' personal storage spaces. There might be multiple, either to shard the load, provide different levels of redundancy or support custom workflows. Or there might be just one, hosting all types of *storage spaces*.

 {{< figure src="/ocis/static/idea.drawio.svg" >}}
@@ -19,7 +19,7 @@ geekdocFilePath: ocis_individual_services.md

 The docker stack consists of at least 24 containers. One of them is Traefik, a proxy which terminates SSL and forwards the requests to oCIS in the internal docker network.

-The other containers are oCIS extensions, running each one in a separate container. In this example oCIS uses its internal IDP [LibreGraph Connect]({{< ref "../../extensions/idp" >}}) and the [oCIS storage driver]({{< ref "../../extensions/storage/storagedrivers" >}}). You also can start more than one container of each service by setting `OCIS_SCALE` to a number greater than 1. Currently this won't scale all services, but we are working on making all service easily scalable.
+The other containers are oCIS extensions, each running in a separate container. In this example oCIS uses its internal IDP [LibreGraph Connect]({{< ref "../../extensions/idp" >}}) and the [oCIS storage driver]({{< ref "../storage/storagedrivers" >}}). You can also start more than one container of each service by setting `OCIS_SCALE` to a number greater than 1. Currently this won't scale all services, but we are working on making all services easily scalable.

 ## Server Deployment
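The `OCIS_SCALE` knob mentioned above is set in the environment the compose stack reads. A minimal sketch, assuming the example deployment picks it up from an `.env` file (the file name is an assumption):

```
# .env read by the example docker-compose stack (file name is an assumption)
OCIS_SCALE=2   # start two containers of each scalable extension
```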
@@ -22,7 +22,7 @@ is also responsible for redirecting requests on the OIDC discovery endpoints (e.

 Keycloak adds two containers: Keycloak itself and PostgreSQL as its database. Keycloak will be configured as oCIS' IDP instead of the internal IDP [LibreGraph Connect]({{< ref "../../extensions/idp" >}}).

-The other container is oCIS itself running all extensions in one container. In this example oCIS uses the [oCIS storage driver]({{< ref "../../extensions/storage/storagedrivers" >}})
+The other container is oCIS itself, running all extensions in one container. In this example oCIS uses the [oCIS storage driver]({{< ref "../storage/storagedrivers" >}}).

 ## Server Deployment
@@ -18,7 +18,7 @@ geekdocFilePath: ocis_traefik.md

 The docker stack consists of two containers. One of them is Traefik, a proxy which terminates SSL and forwards the requests to oCIS in the internal docker network.

-The other one is oCIS itself running all extensions in one container. In this example oCIS uses its internal IDP [LibreGraph Connect]({{< ref "../../extensions/idp" >}}) and the [oCIS storage driver]({{< ref "../../extensions/storage/storagedrivers" >}})
+The other one is oCIS itself, running all extensions in one container. In this example oCIS uses its internal IDP [LibreGraph Connect]({{< ref "../../extensions/idp" >}}) and the [oCIS storage driver]({{< ref "../storage/storagedrivers" >}}).

 ## Server Deployment
@@ -16,7 +16,7 @@ The storage extension wraps [reva](https://github.com/cs3org/reva/) and adds an

 *Clients* will use the *Spaces Registry* to poll or get notified about changes in all *Spaces* a user has access to. Every *Space* has a dedicated `/dav/spaces/<spaceid>` WebDAV endpoint that is served by a *Spaces Provider*, which uses a specific reva storage driver to wrap an underlying *Storage System*.

-{{< figure src="/extensions/storage/static/overview.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/overview.drawio.svg" >}}

 The dashed lines in the diagram indicate requests that are made to authenticate requests or look up the storage provider:
 1. After authenticating a request, the proxy may either use the CS3 `userprovider` or the accounts service to fetch the user information that will be minted into the `x-access-token`.
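The minting step in point 1 can be pictured as signing the fetched user info so downstream services can trust the header without re-querying. This is an illustrative stand-in (plain HMAC-signed JSON, invented claim names and secret), not the actual reva/oCIS token format:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"jwt-secret"  # hypothetical shared secret between proxy and services

def mint_access_token(user_info: dict) -> str:
    """Sign the fetched user info and pack it into an x-access-token value."""
    payload = base64.urlsafe_b64encode(json.dumps(user_info).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_access_token(token: str) -> dict:
    """Check the signature and recover the minted user info."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid x-access-token signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```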
@@ -31,4 +31,4 @@ The bottom part is lighter because we will deprecate it in favor of using only t

 In order to reason about the request flow, two aspects of the architecture need to be well understood:
 1. What kind of [*namespaces*]({{< ref "./namespaces.md" >}}) are presented at the different WebDAV and CS3 endpoints?
 2. What kind of [*resource*]({{< ref "./terminology.md#resources" >}}) [*references*]({{< ref "./terminology.md#references" >}}) are exposed or required: path based or id based?
-{{< figure src="/extensions/storage/static/storage.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storage.drawio.svg" >}}
@@ -10,7 +10,7 @@ geekdocFilePath: namespaces.md

 A *namespace* is a set of paths with a common prefix. Depending on the endpoint you are talking to, you will encounter a different kind of namespace:
 In ownCloud 10 all paths are considered relative to the user's home. The CS3 API uses a global namespace, and the *storage providers* use a local namespace with paths relative to the storage provider's root.

-{{< figure src="/extensions/storage/static/namespaces.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/namespaces.drawio.svg" >}}

 The different paths in the namespaces need to be translated while passing [*references*]({{< ref "./terminology.md#references" >}}) from service to service. While the oc10 endpoints all work on paths, we internally reference shared resources by id, so the shares don't break when a file is renamed or moved inside a storage [*space*]({{< ref "./spaces" >}}). The following table lists the various namespaces, paths and id based references:
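The translation between these namespaces amounts to prefix rewriting. A minimal sketch; the concrete prefixes (`/home/<user>` as the global location of a user's home, a mount point string for the provider-local view) are invented for illustration and do not match oCIS's actual layout:

```python
def oc10_to_global(user: str, oc10_path: str) -> str:
    """Map a path relative to the user's home onto the global namespace."""
    return f"/home/{user}{oc10_path}"

def global_to_local(mount_point: str, global_path: str) -> str:
    """Map a global path to a path relative to the storage provider's root."""
    if not global_path.startswith(mount_point):
        raise ValueError(f"{global_path} is not under mount {mount_point}")
    return global_path[len(mount_point):] or "/"
```

A round trip then looks like `oc10_to_global("einstein", "/docs/a.txt")` giving a global path that `global_to_local("/home/einstein", ...)` maps back to `/docs/a.txt`.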
@@ -186,7 +186,7 @@ The current implementation in oCIS might not yet fully reflect this concept. Fee

 A storage *space* is a logical concept. It organizes a set of [*resources*]({{< ref "#resources" >}}) in a hierarchical tree. It has a single *owner* (a *user* or *group*),
 a *quota* and *permissions*, and is identified by a `storage space id`.

-{{< figure src="/extensions/storage/static/storagespace.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storagespace.drawio.svg" >}}

 Examples would be every user's personal storage *space*, project storage *spaces* or group storage *spaces*. While they all serve different purposes and may or may not have workflows like antivirus scanning enabled, we need a way to identify and manage these subtrees in a generic way. By creating a dedicated concept for them, this becomes easier and makes the codebase cleaner. A storage [*Spaces Registry*]({{< ref "./spacesregistry.md" >}}) then allows listing the capabilities of storage *spaces*, e.g. free space, quota, owner, syncable, root etag, upload workflow steps, ...
@@ -17,7 +17,7 @@ The current implementation in oCIS might not yet fully reflect this concept. Fee

 A *storage provider* manages [*resources*]({{< ref "#resources" >}}) identified by a [*reference*]({{< ref "#references" >}})
 by accessing a [*storage system*]({{< ref "#storage-systems" >}}) with a [*storage driver*]({{< ref "./storagedrivers.md" >}}).

-{{< figure src="/extensions/storage/static/spacesprovider.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/spacesprovider.drawio.svg" >}}

 ## Frontend
@@ -28,7 +28,7 @@ The oCIS frontend service starts all services that handle incoming HTTP requests
 - *datagateway* for up- and downloads
 - TODO: *ocm*

-{{< figure src="/extensions/storage/static/frontend.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/frontend.drawio.svg" >}}

 ### WebDAV
@@ -109,4 +109,4 @@ It is used by the reva *gateway*
 to look up the `address` and `port` of the [*storage provider*]({{< ref "#storage-providers" >}})
 that should handle a [*reference*]({{< ref "#references" >}}).

-{{< figure src="/extensions/storage/static/storageregistry.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storageregistry.drawio.svg" >}}
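The gateway's lookup can be pictured as a longest-prefix match from a path reference to a provider address. An entirely illustrative sketch; the prefixes and addresses below are invented (the ports merely echo the defaults from the removed ports table):

```python
# Illustrative storage registry: map path prefixes to the address of
# the storage provider that serves them (prefixes/addresses invented).
PROVIDERS = {
    "/home": "localhost:9154",
    "/users": "localhost:9157",
    "/public": "localhost:9178",
}

def lookup(path: str) -> str:
    """Return the provider address with the longest matching prefix."""
    matches = [p for p in PROVIDERS if path == p or path.startswith(p + "/")]
    if not matches:
        raise KeyError(f"no storage provider registered for {path}")
    return PROVIDERS[max(matches, key=len)]
```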
@@ -17,4 +17,4 @@ The current implementation in oCIS might not yet fully reflect this concept. Fee

 A storage *spaces registry* manages the [*namespace*]({{< ref "./namespaces.md" >}}) for a *user*: clients use it to look up the storage spaces a user has access to, the `/dav/spaces` endpoint to access them via WebDAV, and where the client should mount them in the user's personal namespace.

-{{< figure src="/extensions/storage/static/spacesregistry.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/spacesregistry.drawio.svg" >}}
@@ -64,12 +64,12 @@ Technically, this means that every storage driver needs to have a map of a `uuid

 ## Technical concepts

 ### Storage Systems
-{{< figure src="/extensions/storage/static/storageprovider.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storageprovider.drawio.svg" >}}

 A *storage provider* manages multiple [*storage spaces*]({{< ref "#storage-space" >}})
 by accessing a [*storage system*]({{< ref "#storage-systems" >}}) with a [*storage driver*]({{< ref "#storage-drivers" >}}).

-{{< figure src="/extensions/storage/static/storageprovider-spaces.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storageprovider-spaces.drawio.svg" >}}

 ## Storage Space Registries
@@ -81,7 +81,7 @@ It is a tree of [*resources*]({{< ref "#resources" >}})
 with a single *owner* (a *user* or *group*),
 a *quota* and *permissions*, identified by a `storage space id`.

-{{< figure src="/extensions/storage/static/storagespace.drawio.svg" >}}
+{{< figure src="/ocis/storage/static/storagespace.drawio.svg" >}}

 Examples would be every user's home storage space, project storage spaces or group storage spaces. While they all serve different purposes and may or may not have workflows like antivirus scanning enabled, we need a way to identify and manage these subtrees in a generic way. By creating a dedicated concept for them, this becomes easier and makes the codebase cleaner. A [*storage space registry*]({{< ref "#storage-space-registries" >}}) then allows listing the capabilities of [*storage spaces*]({{< ref "#storage-spaces" >}}), e.g. free space, quota, owner, syncable, root etag, upload workflow steps, ...