Merge FairSync and InCommon or make it interoperable

@how, what does “AMA” mean?

Ask me Anything :wink:


I did not really understand that Rails stuff :-(

I like that Linked Data stuff somehow, but I currently have no idea how to use it and/or how to define a specification.
For our Minimum Viable Product (MVP) we decided to use a simple OpenAPI specification with a simple data model based on the openfairdb API.
This is what all parties understand and are able to implement for now. Our goal with the MVP is to work and share data in a fast way.

We have things like Linked Data in mind, but not for our MVP. However, it would be nice to have an OpenAPI model that is mappable onto your Linked Data model.
We will surely not have a common understanding, a defined spec, and working platforms using that spec in the next few weeks.

However, we want to understand your thoughts and specification.
For me personally a call is fine, but my English communication skills are not the best.
I often have trouble understanding the German guys in the video conference :wink:

I would like to use your API like
curl -H 'Authorization: Token 23d8009f-f2ec-44b6-bcbe-58ad1d3af10b'
and try to understand what's going on. But I didn't understand the documentation and cannot find curl commands for other resources like places, locations, events, entities?!

Hi @naturzukunft,

Great project you are working on!

Just had a look at the FairSync API Specification. From what I understand, it primarily defines a minimal common data model that should be implemented/implementable by all connected maps and platforms. Is that right?

There is a well-defined and widely-used data model to describe these kinds of things: I think the data model described in the FairSync API specification can be mapped to schema-org.

The advantage of using schema-org as data model is that it is already widely-used (e.g. by GoGoCarto and radar.squat and many more). A lot of existing data does not have to be converted, just fetched.

Furthermore, schema-org defines a Linked Data vocabulary and can be used directly over protocols such as ActivityPub. As an example: I was able to import data from GoGoCarto and radar.squat into an ActivityPub client with zero coordination with the respective projects.

I think there would be immense advantage of using an already widely-used data-model as the “common” model.

Connectors from this data model to platforms that do not use this data model (e.g. WordPress, Drupal, Android, iCal, etc.) would be extremely valuable (I believe beyond the initial use-case of FairSync).
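For illustration, a minimal sketch of such a mapping, assuming hypothetical input field names ("title", "lat", "lng") on the map-entry side (these are not the actual FairSync spec):

```python
# Sketch: mapping a flat map-entry record onto schema.org Place terms,
# expressed as JSON-LD. Input field names are assumptions for illustration.
import json

def to_schema_org_place(record: dict) -> dict:
    """Map a flat map-entry record onto a schema.org Place document."""
    return {
        "@context": "https://schema.org",
        "@type": "Place",
        "name": record.get("title"),
        "description": record.get("description"),
        "geo": {
            "@type": "GeoCoordinates",
            "latitude": record.get("lat"),
            "longitude": record.get("lng"),
        },
    }

entry = {"title": "Repair Café", "description": "Fix things together",
         "lat": 52.52, "lng": 13.405}
print(json.dumps(to_schema_org_place(entry), indent=2))
```

The point is that such a connector is mostly renaming: the structure of the data barely changes.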

I am also happy to participate in an AMA (also in German).


Well, I sent you a few links, including a demo that links to 3 papers you really should read. I understand your MVP is something you have in mind as a “simple” and “fast” way to share data, but you must understand as well that what @pukkamustard proposes is already working in the demo – this time I link to the associated blog article that clarifies the approach of using Linked Data and ActivityPub for content curation – and implementing something too simple may incur more work for everyone in the future. Fast is not necessarily a good approach to interoperability design.

I’m not certain I understand your motivation and goals with regard to merging FairSync and IN COMMON or making them interoperable if you’re not interested in understanding what we have in mind. I can only support @pukkamustard in saying that we’re happy to discuss a common model, and indeed our serializations should be compatible with definitions to some extent to facilitate interoperability – especially if we’re talking about data synchronization among distinct software approaches.

Maybe we could start with text-form questions so we can prepare a first voice meeting in the coming days. I could for example start from your Place model and compare it with IN COMMON’s and Schema. That could help bootstrap a common understanding of what we’re dealing with. What do you think?

Places and Entities are Resources. You can consume them like any other resources – except you can only write them via their own route (see below[1]). Events are not yet implemented. All routes are defined in config/routes.rb, here are the details[2]:

$ rails routes
         Prefix Verb   URI Pattern                Controller#Action
           root GET    /                          api#index
 api_taxonomies GET    /taxonomies(.:format)      api/v0/taxonomies#index
                POST   /taxonomies(.:format)      api/v0/taxonomies#create
   api_taxonomy GET    /taxonomies/:id(.:format)  api/v0/taxonomies#show
                PATCH  /taxonomies/:id(.:format)  api/v0/taxonomies#update
                PUT    /taxonomies/:id(.:format)  api/v0/taxonomies#update
                DELETE /taxonomies/:id(.:format)  api/v0/taxonomies#destroy
 api_categories GET    /categories(.:format)      api/v0/categories#index
                POST   /categories(.:format)      api/v0/categories#create
   api_category GET    /categories/:id(.:format)  api/v0/categories#show
                PATCH  /categories/:id(.:format)  api/v0/categories#update
                PUT    /categories/:id(.:format)  api/v0/categories#update
                DELETE /categories/:id(.:format)  api/v0/categories#destroy
   api_sections GET    /sections(.:format)        api/v0/sections#index
                POST   /sections(.:format)        api/v0/sections#create
    api_section GET    /sections/:id(.:format)    api/v0/sections#show
                PATCH  /sections/:id(.:format)    api/v0/sections#update
                PUT    /sections/:id(.:format)    api/v0/sections#update
                DELETE /sections/:id(.:format)    api/v0/sections#destroy
api_collections GET    /collections(.:format)     api/v0/collections#index
                POST   /collections(.:format)     api/v0/collections#create
 api_collection GET    /collections/:id(.:format) api/v0/collections#show
                PATCH  /collections/:id(.:format) api/v0/collections#update
                PUT    /collections/:id(.:format) api/v0/collections#update
                DELETE /collections/:id(.:format) api/v0/collections#destroy
     api_agents GET    /agents(.:format)          api/v0/agents#index
                POST   /agents(.:format)          api/v0/agents#create
      api_agent GET    /agents/:id(.:format)      api/v0/agents#show
                PATCH  /agents/:id(.:format)      api/v0/agents#update
                PUT    /agents/:id(.:format)      api/v0/agents#update
                DELETE /agents/:id(.:format)      api/v0/agents#destroy
      api_roles GET    /roles(.:format)           api/v0/roles#index
                POST   /roles(.:format)           api/v0/roles#create
        api_role GET    /roles/:id(.:format)       api/v0/roles#show
                PATCH  /roles/:id(.:format)       api/v0/roles#update
                PUT    /roles/:id(.:format)       api/v0/roles#update
                DELETE /roles/:id(.:format)       api/v0/roles#destroy
      api_users GET    /users(.:format)           api/v0/users#index
                POST   /users(.:format)           api/v0/users#create
       api_user GET    /users/:id(.:format)       api/v0/users#show
                PATCH  /users/:id(.:format)       api/v0/users#update
                PUT    /users/:id(.:format)       api/v0/users#update
                DELETE /users/:id(.:format)       api/v0/users#destroy
       api_maps GET    /maps(.:format)            api/v0/maps#index
                POST   /maps(.:format)            api/v0/maps#create
        api_map GET    /maps/:id(.:format)        api/v0/maps#show
                PATCH  /maps/:id(.:format)        api/v0/maps#update
                PUT    /maps/:id(.:format)        api/v0/maps#update
                DELETE /maps/:id(.:format)        api/v0/maps#destroy
  api_positions GET    /positions(.:format)       api/v0/positions#index
                POST   /positions(.:format)       api/v0/positions#create
   api_position GET    /positions/:id(.:format)   api/v0/positions#show
                PATCH  /positions/:id(.:format)   api/v0/positions#update
                PUT    /positions/:id(.:format)   api/v0/positions#update
                DELETE /positions/:id(.:format)   api/v0/positions#destroy
  api_locations GET    /locations(.:format)       api/v0/locations#index
                POST   /locations(.:format)       api/v0/locations#create
   api_location GET    /locations/:id(.:format)   api/v0/locations#show
                PATCH  /locations/:id(.:format)   api/v0/locations#update
                PUT    /locations/:id(.:format)   api/v0/locations#update
                DELETE /locations/:id(.:format)   api/v0/locations#destroy
  api_addresses GET    /addresses(.:format)       api/v0/addresses#index
                POST   /addresses(.:format)       api/v0/addresses#create
    api_address GET    /addresses/:id(.:format)   api/v0/addresses#show
                PATCH  /addresses/:id(.:format)   api/v0/addresses#update
                PUT    /addresses/:id(.:format)   api/v0/addresses#update
                DELETE /addresses/:id(.:format)   api/v0/addresses#destroy
     api_emails GET    /emails(.:format)          api/v0/emails#index
                POST   /emails(.:format)          api/v0/emails#create
      api_email GET    /emails/:id(.:format)      api/v0/emails#show
                PATCH  /emails/:id(.:format)      api/v0/emails#update
                PUT    /emails/:id(.:format)      api/v0/emails#update
                DELETE /emails/:id(.:format)      api/v0/emails#destroy
      api_links GET    /links(.:format)           api/v0/links#index
                POST   /links(.:format)           api/v0/links#create
       api_link GET    /links/:id(.:format)       api/v0/links#show
                PATCH  /links/:id(.:format)       api/v0/links#update
                PUT    /links/:id(.:format)       api/v0/links#update
                DELETE /links/:id(.:format)       api/v0/links#destroy
     api_phones GET    /phones(.:format)          api/v0/phones#index
                POST   /phones(.:format)          api/v0/phones#create
      api_phone GET    /phones/:id(.:format)      api/v0/phones#show
                PATCH  /phones/:id(.:format)      api/v0/phones#update
                PUT    /phones/:id(.:format)      api/v0/phones#update
                DELETE /phones/:id(.:format)      api/v0/phones#destroy
  api_resources GET    /resources(.:format)       api/v0/resources#index
   api_resource GET    /resources/:id(.:format)   api/v0/resources#show
   api_entities GET    /entities(.:format)        api/v0/entities#index
                POST   /entities(.:format)        api/v0/entities#create
     api_entity GET    /entities/:id(.:format)    api/v0/entities#show
                PATCH  /entities/:id(.:format)    api/v0/entities#update
                PUT    /entities/:id(.:format)    api/v0/entities#update
                DELETE /entities/:id(.:format)    api/v0/entities#destroy
     api_places GET    /places(.:format)          api/v0/places#index
                POST   /places(.:format)          api/v0/places#create
      api_place GET    /places/:id(.:format)      api/v0/places#show
                PATCH  /places/:id(.:format)      api/v0/places#update
                PUT    /places/:id(.:format)      api/v0/places#update
                DELETE /places/:id(.:format)      api/v0/places#destroy
     api_things GET    /things(.:format)          api/v0/things#index
                POST   /things(.:format)          api/v0/things#create
      api_thing GET    /things/:id(.:format)      api/v0/things#show
                PATCH  /things/:id(.:format)      api/v0/things#update
                PUT    /things/:id(.:format)      api/v0/things#update
                DELETE /things/:id(.:format)      api/v0/things#destroy

  1. /resources are for reading any type of resource, while /place, /entity, etc. are for writing. ↩︎

  2. :format is json; it could be xml, etc., but we’re focusing on JSON:API. ↩︎
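To make the read/write split concrete, here is a sketch that only builds the HTTP requests without sending them (the host and token below are placeholders, not real endpoints or credentials):

```python
# Sketch: constructing requests for the routes above. Reads of any type go
# through /resources/:id; writes use the type-specific route (e.g. /places).
# BASE and TOKEN are placeholders for illustration.
from urllib.request import Request

BASE = "https://incommon-api.example"
TOKEN = "00000000-0000-0000-0000-000000000000"

def read_resource(uuid: str) -> Request:
    # GET /resources/:id reads any type of resource
    return Request(f"{BASE}/resources/{uuid}",
                   headers={"Authorization": f"Token {TOKEN}"})

def create_place(payload: bytes) -> Request:
    # POST /places writes a Place via its own route
    return Request(f"{BASE}/places", data=payload, method="POST",
                   headers={"Authorization": f"Token {TOKEN}",
                            "Content-Type": "application/vnd.api+json"})

req = read_resource("bcd7894b-650b-482a-b186-123b0ec33c6c")
print(req.full_url, req.get_method())
```

Passing the built request to `urllib.request.urlopen` would perform the actual call.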


Hi @pukkamustard, for now there are only 2 to 4 platforms that will be using this specification. For the MVP it’s only weChange and KVM. That’s also a reason why we try to keep the interface simple.

I was analysing the properties of the interface this morning and wrote some things about that session here: What do you mean?

The specification currently under construction will be a first draft to quickly get a running interface between weChange and KVM. And while discussing, we learn a lot about the thoughts and use cases of those two platforms. Currently we discuss whether the Object is a place or an organisation :-)


Continuing the discussion from Merge FairSync and InCommon or make it interoperable:

From what I can read it seems that you are trying to start again from scratch while many things have already been resolved and are working.

I can only insist on what has been said before, we all agree we should have a common understanding and spec, therefore it is essential that you look at the work that has been done, already implemented and functional.

The choice to follow has been discussed collectively for its large scope of existing adoption and its Linked Data and ActivityPub compatibility. I think it also has the very important quality of allowing us to skip endless discussion about specifications, please look at it.

For the purpose of getting things working it feels the path of discussing yet another spec model is really not the way to go.

We have a bit of a bad experience here with long discussions about specifications, so we want to get started sooner in order to learn. However, this does not mean that we will not look at other models and interfaces during this time. I realize that part of the work may have to be thrown away later. But the preparation of the data in the platforms must be made independent of the definition of the interface, and changing the syntax of an interface shouldn’t be a big problem if the data and semantics are similar.

If you understood that we are not interested in understanding your interfaces, then you completely misunderstood. We are very interested in it.

You’re not talking about the ActivityStreams Vocabulary, why?

For me it would be ok if we stay with the text form for a few more days. A lot is going through my head right now and I also have other projects.


Hi @natasha

From what I can read it seems that you are trying to start again from scratch while many things have already been resolved and are working.

Not 100%. We concentrate on an existing interface of KVM, an almost similar interface of , and on discussions we have had for ~1 year, with a lot of breaks. So from that point of view it’s more a common thing between 2-3 platforms.

I can only insist on what has been said before, we all agree we should have a common understanding and spec, therefore it is essential that you look at the work that has been done, already implemented and functional.

This is what we do! The one does not exclude the other.

For the purpose of getting things working it feels the path of discussing yet another spec model is really not the way to go.

Maybe you are right, but I have been defining and implementing interfaces between systems for ~20 years now. That we will get a common interface (with the complexity we are talking about) across all existing systems is more a dream than reality. But from time to time dreams become reality, and that should be our main goal. Allow diversity next to the great interface. There will always be many other systems that have to be adapted. We shouldn’t stop the earth while we’re looking for the great solution.

Thank you very much for that discussion! I’m sure there is a lot to learn for me.


We’ve been having this conversation for a long time, so I’m going to comment on your document:

Place / POI

Our Place model serializes as a Place in ActivityStreams, but the underlying model is more complex and includes specific input validation to ensure high precision of geolocation across the globe, and RFC- or ISO-compliant data fields for addresses, phone numbers (E.164), email addresses, URLs, etc. The idea is to encourage a strong commitment to enriching the data among applications.


Identifiers are critical, especially if we’re sharing data between uncoordinated applications. That’s why we use random UUIDs (v4) for our resources to ensure globally unique identifiers (either by using these UUIDs directly, in the form of a URI <UUID>, or, as in ERIS, using a content-addressable URN: urn:sha256:7d49499925933ab8b96f5808c904d5797892e39219ce88dc6f342503804b0743), since the whole point of IN COMMON is to offer a global cache for any public resource that other applications can use to discover and interact with original data[1].
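A sketch of the content-addressable idea: hashing a URI into a urn:sha256 name. This only illustrates the hashing pattern; the actual ERIS encoding is more involved.

```python
# Sketch: deriving a content-addressable URN from a URI, in the spirit of
# the urn:sha256 example above. Not the real ERIS encoding.
import hashlib

def uri_to_urn(uri: str) -> str:
    digest = hashlib.sha256(uri.encode("utf-8")).hexdigest()
    return f"urn:sha256:{digest}"

urn = uri_to_urn("https://incommon-api.example/resource/8cea2370-daca-498f-82c3-79e8950a11f1")
print(urn)
```

Anyone hashing the same URI gets the same URN, which is what makes the name usable without coordination.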


We also postponed the implementation of versioning since it will depend on the underlying data model; e.g., if we deal with immutable data, then versioning is about keeping track of the tree, which may in turn be a simple property of the data model.


This is a standard field that applies to the creation date of the record. We also maintain updatedAt since we’re currently using a relational model, but as with versioning, this could become useless if we’re dealing with immutable data.


Indeed, this amounts to a tombstone. In our case, we change the object type.


We do not use a specific license on Resource objects. We’re dealing with open data exclusively, so the license is not very important as such. If we need to accommodate specific terms, we prefer not to tie them to the data model. Our license for data is currently documented in our Charter but is not very specific – there’s a need for collective coordination, and IMO it should not be coupled with the data structure at this point.


I’m not sure I understand the term ‘source’ here: we’re using the concept of ‘Agents’ responsible for data maintenance – and we do not trust sources as such. We do not have “owners” unless we consider collective ownership as responsibility; so an Agent can be contacted by email. A data point may have associated phone numbers, but they’re no more “data owners” than the rest of the public.


We use name. It’s an arbitrary UTF-8 string. The limit is currently not enforced, but I think it’s about 128 characters.


We use a summary and a description: the difference is that summary is shorter (~134 characters, according to SEO) and text-only, while description is longer and in Markdown.
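A sketch of how the summary constraint could be enforced. Only the ~134-character, text-only convention comes from the post; the truncation strategy below is an assumption.

```python
# Sketch: a plain-text summary truncated to the ~134-character limit
# mentioned above. Whitespace collapsing is a naive assumption; it does
# not strip Markdown from the description.
SUMMARY_MAX = 134

def make_summary(text: str) -> str:
    """Return a single-line summary of at most SUMMARY_MAX characters."""
    plain = " ".join(text.split())  # collapse whitespace and newlines
    if len(plain) <= SUMMARY_MAX:
        return plain
    return plain[:SUMMARY_MAX - 1].rstrip() + "…"

print(make_summary("A long Markdown description\n\nwith **formatting**."))
```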


We have postponed the definition of tags since OSM is redefining them, and all applications have different approaches. But tags are definitely not a property, rather a relation, so they can be serialized, but this will be done later when we collectively figure out a useful way to deal with them.


Sounds like tags… Maybe it’s related to our Taxonomy tree with Categories and Sections.

image and logo

We did not define anything for this yet. This can be handled as a ResourceProperty (e.g., ResourceImage) to be defined. Images can be inserted via the Markdown description.


We use a specific Position model for this which supports GeoJSON types and high precision for latitude and longitude (7 decimals each) and, thanks to PostGIS storage, supports geographic and geometric calculations. The serialization then goes beyond the spec :slight_smile: since we can support arbitrary positions. Also, having positions decoupled from the data helps sort resources at the same location.
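As a sketch of the decoupled Position idea, here is a latitude/longitude pair serialized as a GeoJSON Point with 7-decimal precision. The range check and rounding behaviour are assumptions for illustration.

```python
# Sketch: a Position serialized as a GeoJSON Point with 7-decimal
# precision, per the description above. Validation details are assumed.
def to_geojson_point(lat: float, lon: float) -> dict:
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinates out of range")
    # GeoJSON orders coordinates as [longitude, latitude]
    return {"type": "Point",
            "coordinates": [round(lon, 7), round(lat, 7)]}

print(to_geojson_point(50.846557, 4.351697))
```

Seven decimals corresponds to roughly centimetre-level precision, which is why further digits can be dropped.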

address, legalAddress, contacts

We use ResourceAddresses, ResourcePhones, ResourceEmails, etc., which can then be serialized according to the wanted representation. Each model strives to implement proper standard validation.

opening hours

We do not have this concept yet: it can be added to the description and may be irregular, so we prefer postponing this kind of information and delegating it to humans instead of making things overly complicated to please automation – since automation is kind of opposite to technical progress.


We maintain a meta JSON field for resources to account for incompatible data points in our model. Incompatible data can then be reviewed, and adaptors eventually created for known patterns. The need for this approach comes from the experience of incompatible and invalid data, e.g., PhoneNumber = On Mondays, call Sally at +1 234 567 8900, on Wednesday call Jack at +2 345 678 901. The meta field allows specific applications to transmit extra data relevant to them without losing information, and slowly working collectively towards harmonization and adoption of widely used common fields.
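The meta escape hatch could look roughly like this; the E.164 check below is a crude placeholder for illustration, not our actual validation.

```python
# Sketch of the "meta" escape hatch described above: values that fail
# validation are preserved under meta for later review, instead of being
# dropped. The phone pattern is a crude E.164-like placeholder.
import re

E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def ingest_phone(record: dict, raw: str) -> dict:
    if E164.match(raw.replace(" ", "")):
        record["phone"] = raw
    else:
        # keep the incompatible value; adaptors can handle known patterns later
        record.setdefault("meta", {})["phone_raw"] = raw
    return record

print(ingest_phone({}, "On Mondays, call Sally at +1 234 567 8900"))
```

No information is lost, and harmonization can happen collectively over time.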

  1. Well, not so ‘whole’ as in unique since we’re also dealing with collective maintenance of such data, and uniqueness and caching could be done with many other approaches, like IPFS. ↩︎

Heh, this is why we have IN COMMON, with precise data models, and no interface. :wink:

Well, the interfaces are interchangeable, and secondary in our approach. We’re more interested in making the data interoperable and solid on the long term. That’s why we started with input validation, figuring ways to avoid abuse of field definitions like the example PhoneNumber I gave above, and turned towards Linked Data recently because it makes things easier to share on the global scale. We’re a step closer to standardization, and we’re also thinking about keeping things simple, accessible and human – in the sense that collective organizations should be in charge of the data, not machines: machines are there to be bent to our actual needs, not the other way around.

Because we’re currently interested in the data model, and ActivityStreams is a step further, when we need to share the data. We already committed to support ActivityPub (and SOLID), so it’s not like we’re foreign to these approaches, we’re factoring them in, but they appear in the serialized (“flattened”) view of the data. I guess that’s the point of @pukkamustard when he says that he could include GoGoCarto and Radar data without coordination with them: once we have the data straight, serialization should be straightforward, and enriching the data is the harder step.

OK, a lot of stuff. I see this will end in nice discussions between our platforms :wink:

Just a few things:

I wonder how you want to deal with it if a system does not implement your interface and has to be adapted.
I assume that a system has a unique URL when it is on the internet; therefore / would also be unique, right?
If a platform has an internal id that doesn’t match your spec, how should an adapter handle it?

@wellemut topic for you :wink:

That’s a difficult ongoing discussion we currently have between KVM and WeChange. It stays interesting.

@wellemut so doing everything with tags is not possible :wink: :wink:

the idea is that there will be tags for which someone is responsible. e.g. issues certificates for organizations and an organization may only have the tag if the verifies this.
Possibly tags are the wrong way here!
@wellemut you should lead that discussion :wink:

The reason for this was that there may be special agreements between certain platforms, in order to be able to transfer data according to these agreements without having a common specification. A kind of back door. Surely not a good idea and design. The opening hours will surely be placed in such fields :wink:

I think we have different platforms or organizations in mind. We are currently importing CSV files. We are far from discussing their data models with organizations. We are rather happy to receive data in any form. But maybe that’s just my feeling.

Hi Fredy,

I think that is a very good first step. Understand the data in whatever form it is available. I believe this is also what IN COMMON has been extensively doing.

I think this might be a confusion about Linked Data. Linked Data (or RDF, to be precise) defines the shape of data in general (as a graph).

Vocabularies can be used to describe meaningful concepts in RDF by defining unique names (URIs) for abstract things.

Vocabularies can then be used to describe real things (e.g. an Event can be described with Schema-org vocabulary).

Different vocabularies capture different aspects of reality. For example: the ActivityStreams Vocabulary only describes social-network-like interactions between actors. MOAC defines a vocabulary for data relevant to crisis response. ESSGlobal was an effort to create a vocabulary for the SSE. Schema-org breaks the idea slightly by breaking bounds and encompassing diverse concepts (everything that FANG understands).

All vocabularies are interoperable. That is, you can describe something with terms from schema-org and from ESSGlobal and at the same time share it with terms from ActivityStreams. I believe that is the power of Linked Data – composition of vocabularies beyond their intended scope.
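A tiny sketch of such composition: one JSON-LD document describing the same thing with terms from two vocabularies at once. The term choices are illustrative only.

```python
# Sketch: a single JSON-LD document mixing schema.org and ActivityStreams
# terms, as described above. Not a FairSync specification.
import json

doc = {
    "@context": {
        "schema": "https://schema.org/",
        "as": "https://www.w3.org/ns/activitystreams#",
    },
    "@type": ["schema:Place", "as:Place"],
    "schema:name": "Repair Café",
    "as:summary": "One resource described with two vocabularies at once",
}
print(json.dumps(doc, indent=2))
```

A consumer that only knows one of the two vocabularies can simply ignore the other's terms.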

So the discussion about how to describe the data to be synced by FairSync does not have to be – and should not be – limited to ActivityStreams.

I’m afraid the whole Linked Data idea is slightly complicated. But I believe it is very simple at its core, and I’m very happy that we are having this discussion.

Maybe this describes something that is sometimes called “provenance”. E.g. what is the source of this data?

There is a widely-used vocabulary (dcterms) which defines a source property. Maybe this could be used?
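A minimal sketch of annotating a cached record with dcterms:source for provenance; the URLs below are placeholders.

```python
# Sketch: recording provenance with the dcterms:source property,
# as suggested above. URLs are placeholders for illustration.
record = {
    "@context": {"dcterms": "http://purl.org/dc/terms/"},
    "@id": "https://cache.example/resource/831cc02f-4bb5-4b1c-ad6f-5869e3f3a3e8",
    "dcterms:source": "https://gogocarto.example/element/42",
}
print(record["dcterms:source"])
```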

There is also the (way more complicated) Provenance Ontology. I feel it makes things a bit too complicated…but worth another look.

I’ve sketched out a simple vocabulary for caching that also is able to describe the “source of cached content”.

This is a draft to discuss IN COMMON identifiers, in response to Merge FairSync and InCommon or make it interoperable:

1.0 – Context

Explain why we need them, and how they came to be.

1.1 Unstable Identifiers

The OSM identifier issue: one cannot attach resources to stable geographical positions in OSM.

1.2 Uniform Resources

Reminder about the URI, URL, and URN

1.3 UUIDs

Introducing UUIDv4

2.0 – IN COMMON Identifiers

What are they and how are they constructed?

2.1 Namespaces and Random UUID Assignment

Describe routes and UUID usage for IN COMMON resources

2.2 Content-Addressed Resources

Describe work on ERIS and cryptographic hash of URIs

2.3 Standardization Effort

Describe URN Registration and RFC work

Original reply draft…

There are several ways to deal with this. The original approach was to consider using random UUIDs (v4) for all shared resources, generating consistent URLs in the form<UUID>. If the original data provides a UUID, we use it; otherwise we generate one.

At some point I considered only storing URIs – which matches a Linked Data approach – but the whole point of using UUIDs attached to resources was to fill the gap with OSM data that change identifiers with versions: since OSM has many identifiers for a single resource, or as many identifiers as there are phases of a single (technical) individual, it does not help with the purpose of keeping track of the Commons, where, e.g., an organization changes addresses and phone numbers, while remaining the same Entity.

The more we’re moving towards Linked Data, the more URIs become important. Ultimately, we should be able to support “IN COMMON like” identifiers: https://<domain>/<type>/<uuid> and any form of compatible URIs (e.g., a resource from Communecter using their internal URI identifier) – with ERIS, previously mentioned, content-addressable resources would use a hash of the corresponding URI and produce a URN that would then be useful without any notions of domain: at this point, “IN COMMON like” identifiers would be like ns URIs and not necessarily match actual URIs, or more concretely, one would be able to type: https://incommon-api.example/resource/8cea2370-daca-498f-82c3-79e8950a11f1 and obtain a list of URNs and URIs to interact with the wanted resource – IN COMMON would simply help dereference URIs to shared Commons resources.

I would expect platforms not to expose their internal identifiers. In any case, we would store the external URI and provide a local UUID, e.g., if a system provides https://map.example/foo/map/organization/234, we would provide and, and eventually some capability URLs referencing the a7eb8f10-4bef-4b67-8062-9baf78429e6f UUID. Each version of this resource would be addressable with urn:sha256:<hash of> and dereference to the correct (local and remote) URIs and versions.

In case we’re simply reading the remote resource, we’d probably redirect to its original URI unless the requested format is explicitly JSON – but I’m already thinking ahead, when a
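The identifier mapping described above could be sketched roughly like this; the hash-the-URI versioning detail is an assumption for illustration.

```python
# Sketch: store the external URI, mint a local random UUID (v4) unless one
# was provided, and derive a content-addressed URN. Details are assumed.
import hashlib
import uuid

def register(external_uri, known_uuid=None):
    local_id = known_uuid or str(uuid.uuid4())
    version_urn = "urn:sha256:" + hashlib.sha256(
        external_uri.encode("utf-8")).hexdigest()
    return {"uuid": local_id, "source": external_uri, "urn": version_urn}

print(register("https://map.example/foo/map/organization/234"))
```

The internal identifier of the source platform never leaks; only the stable UUID and the URN circulate.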

@how Wow, I misunderstood IN COMMON. It looks like IN COMMON is the central data manager for all platforms. I had hoped it was more of an API specification, and that all platforms adhering to this API could exchange data with each other. But when IN COMMON creates its own UUIDs for data from other platforms and stores the mapping, that sounds like centralism. I’m confused.

I confirm you misunderstood IN COMMON: the objective is to have distributed databases, totally the opposite of a centralist approach. We use UUIDs to enable other applications to reuse these identifiers and ensure we’re talking about the same resources: they’re random, and they’re simply assigned to a resource if the incoming resource does not already provide one.

Moreover, URIs are constructed as https://<domain>/<type>/<uuid> where all three are variable. We provide as a global cache for such resources, so that when any application receives an Entity with UUID 831cc02f-4bb5-4b1c-ad6f-5869e3f3a3e8, it can check information on whether it is known, who provides it, if there are discussions attached or conflicting views, who administers the resource, etc.

Eventually we’ll have local resources mapping lists of content-addressable URNs, e.g., using ERIS. You really should read up before jumping to weird conclusions about centralization, where our goal is to enable peer-to-peer, uncoordinated data synchronization of the Commons, locally controlled by commoners.

OK, I don’t quite understand “We provide as a global cache for such resources”; excuse my last comment. Is it necessary to understand everything before I look at the format of the data exchange? I would like to understand how a platform has to provide data for IN COMMON and how it can receive data from IN COMMON.

Just had another look.

I was expecting some kind of context that describes the type of the data. Did I misunderstand that?

{
	"data": {
		"id": "bcd7894b-650b-482a-b186-123b0ec33c6c",
		"type": "resources",
		"attributes": {
			"name": "La Maison Vert et Bleue (MVB)",
			"summary": null,
			"description": "La Maison verte et bleue (MVB) encourage, en collaboration avec la Commune d’Anderlecht, la préservation de la vocation rurale de Neerpede à travers le développement de projets qui valorisent son potentiel économique, social et/ou environnemental. La MVB désire contribuer au développement d’un pôle rural régional bruxellois (voire transrégional). Au cœur d’un dispositif qui a tous les atouts nécessaires pour l’émergence d’une filière résiliente en alimentation, nous travaillons avec différents partenaires pour valoriser les différents maillons de la chaîne (Production, Transformation, Distribution, Consommation et Economie locale, Recyclage et Compost). La MVB organise également des activités de sensibilisation à l’alimentation durable et au respect de la biodiversité.\r\n\r\nActivités:\r\n-Renseignement pour les futurs Indépendants.\r\n-Salle événementielle\r\n-...",
			"main_address": null,
			"main_email": null,
			"main_link": null,
			"main_phone": null,
			"type": "Resource",
			"uuid": "bcd7894b-650b-482a-b186-123b0ec33c6c",
			"visible": true,
			"created_at": "2016-10-13T15:08:04.851Z",
			"updated_at": "2018-10-27T18:46:00.629Z"
		},
		"relationships": {
			"agent": { "meta": { "included": false } },
			"locations": { "meta": { "included": false } },
			"addresses": { "meta": { "included": false } },
			"emails": { "meta": { "included": false } },
			"links": { "meta": { "included": false } },
			"phones": { "meta": { "included": false } },
			"collections": { "meta": { "included": false } }
		},
		"links": {
			"self": ""
		},
		"meta": {
			"current": {
				"agent_name": "IN COMMON Agent",
				"agent_uuid": "7218b579-a1b4-4318-9fa8-e8dbff2a61c0",
				"user_uuid": "6c8be57b-5730-428d-bdf2-ff9bcb37d7fd",
				"restricted": true,
				"anonymous": true,
				"locale": "en"
			}
		}
	},
	"links": {
		"agent": ""
	},
	"jsonapi": {
		"version": "1.0"
	}
}
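For what it’s worth, JSON:API carries the type in data.type and the payload in data.attributes rather than a JSON-LD @context. A minimal sketch of reading such a document, using a trimmed-down copy of the response above:

```python
# Sketch: reading a JSON:API document. The type lives in data.type and the
# payload in data.attributes; there is no JSON-LD @context.
import json

raw = """
{"data": {"id": "bcd7894b-650b-482a-b186-123b0ec33c6c",
          "type": "resources",
          "attributes": {"name": "La Maison Vert et Bleue (MVB)"}},
 "jsonapi": {"version": "1.0"}}
"""
doc = json.loads(raw)
print(doc["data"]["type"], doc["data"]["attributes"]["name"])
```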