
DATABASE WORKING GROUP, THURSDAY, 18TH NOVEMBER 2010, 4:00 P.M.:

CHAIR: So, after some administrative delay, welcome to the beginning of the Database Working Group. Our colleagues from the RIPE NCC were already ringing the bells, so let's see how many people will eventually find their way to the Working Group in this room. We have got quite a list of items. We'll get started with the administrative aspects.

Welcome, my name is Wilfred Woeber, I am working for Vienna University and the Austrian research and education network. I am one of the co-chairs of the Database Working Group. The other active co-chair, Nigel Titley, is in the corner at the front of the audience. We have to find a scribe; Nigel has already volunteered to do that. My sincere thanks for doing that almost on a regular basis whenever there is a chance, thank you very much.

For the administrative things, this session is, like all the others, webcast, so in case you walk up to the microphones for your contribution, please state your name and try to make yourself known to our audio technician.

The next thing to do is to finalise the agenda. There is a version 4 draft agenda that was circulated yesterday afternoon. There is also a version available on the website which includes a couple of URLs, a couple of pointers to supporting material. I have removed those from this PowerPoint slide here, just to guide us through the list.

Is there anything else in addition to the stuff that is already on the draft agenda that you want to bring up? Anything you want to see removed? No? Okay. Then this is the agenda.

As usual in all of the Working Groups, there is also the any-other-business item for leftovers, to pick up the bread crumbs.

Next thing is approval of the minutes from the previous meeting. They have been circulated on the mailing list. They have been available, I don't know for how long, but for quite a while. They have also been available on the website of the Working Group. Is there anything you want to submit as last minute changes? No? Okay. So then these are formally the records of the previous meeting.

The next thing to do is to walk briefly through the action list. Nigel already told me that this is going to be a quick exercise. There are only a few items on the list, and some of them will actually be covered by presentations by our colleagues from the RIPE NCC. So, would you take us through the list?

NIGEL TITLEY: This will only take about 30 seconds. We have one historic action point, on Wilfred, which is to refer a question about adding this. I was talking to Wilfred about this earlier and we think it's overtaken by events.

WILFRED WOEBER: I agree, and I think, double checking with the folks from the database services at the NCC, this is actually overtaken by events because we now have mandatory maintainers. And the mechanisms that we wanted to implement are actually there by way of the maintainer infrastructure, is that correct? I see nodding. Okay. Denis?

AUDIENCE SPEAKER: Denis Walker, RIPE NCC. We have added a mandatory maintainer to the person and role objects, which means you can maintain the person object itself. Maybe this was referring more to requiring authorisation to add a reference to a person object.

NIGEL TITLEY: That's correct it was.

AUDIENCE SPEAKER: In a way it's completely pointless, because if you require authorisation to add a reference to a person object, you can just create a copy of the person object and reference that one instead, so it wouldn't actually achieve anything.

WILFRED WOEBER: Thank you Denis. So I guess we can just remove that from the action list as overtaken by events.

NIGEL TITLEY: Yes, that's what I would suggest. Okay. So that's the historic one. And then there are two left over from the last meeting. The first, and this was on everybody in the Database Working Group, was to discuss the proposed ping-c attribute. The ping-c attribute is something that you add to an inetnum or an inet6num object, and it gives you an IP address that you can ping for debugging and troubleshooting problems in accessing that particular address space. I haven't seen any discussion on the mailing list about this, have you, Wilfred?

WILFRED WOEBER: I presume this is covered by the presentation by Paul Palse regarding stuff being test-driven in a pilot, correct? Yes, again nodding.

NIGEL TITLEY: This will be dealt with later, and the second action point will also be covered by the RIPE NCC's presentation further on, I should imagine.

CHAIR: Yes? Perhaps not? No. Okay. So perhaps it isn't. So that one is ongoing then. And that's it. Absolutely.

WILFRED WOEBER: Thank you, Nigel, thank you very much for keeping track of that. Which takes us to the next item on the agenda, and just for scheduling reasons, because our colleague has to move over to a competing Working Group which has already been going on for a while, I suppose, please go ahead.

RICHARD BARNES: My name is Richard Barnes. I hope you'll forgive me for going a little bit quickly, because I am supposed to be chairing the other Working Group.

So, the problem, the question I want to ask today is: do we want to put some information in the database to help support IP geolocation? The background on this is that content providers and a lot of applications nowadays increasingly want to incorporate geolocation in their service delivery: things like figuring out what language Google or Yahoo or whatever site you are interested in should be delivered in, or delivering the right ads for someone's location context.

So, right now the way they do that, there are a lot of proprietary things out there: people pulling registration information from the WHOIS and using that as a proxy for geolocation, latency mapping, things like that. Which works well a lot of the time, but it has pitfalls. Like: we are here, and if you go to Yahoo, this is what they think you should get. We get the Netherlands version because the addresses we are using here are registered in Amsterdam, and several of the location databases infer that to mean that the computers are actually located in Amsterdam, even though they are actually in Rome right now. We should actually be getting the Italian version of Yahoo. I guess, given the international nature of this audience, maybe we should be getting something else, but one would presume that if they knew the addresses were in Rome they'd deliver the Italian version of the page.

The result of these gaps in understanding, this sort of heuristic approach to geolocation, is that on a roughly monthly basis you see e-mails to lists like NANOG and apops requesting ways to fix this: can somebody point me to the right contact at Google to update their location database, because my customers are getting the wrong thing. When services go wrong, ISPs get support calls and they try to figure out how to fix what went wrong. So you are fixing things one by one, fixing Google, fixing Bing and Yahoo, and all this fishing around for the right contact to get to.

When they fail you get support calls, you have to fix things, and you have to fish around and figure out how to fix this. Maybe there is a path to make this better. So, we have sort of a good incentive alignment here: the applications and content providers want to have the right geolocation; ISPs want the applications to work right; and the ISPs in a lot of cases have the information about where subscribers are. You have got databases, tables that tell you who is on what physical line. So maybe we just need a way for the ISPs, who want the content to work, to provide this geolocation information up to the content providers who need it to make things work.

So, why don't we have a registry for this? This is how we traditionally distribute information about net blocks. We already tell people a lot of information about IP addresses through things like the WHOIS database, the RIPE database. We tell them things like what organisations IP addresses are registered to, and who to contact if abuse comes up, referring to the previous Working Group. So we could regard geolocation information as just another data element in this. Putting the stuff in the database would allow ISPs to provide real, authoritative, registered geolocation information instead of things like registration contacts, which would allow these content providers to use real data for these geolocation-based services instead of having to make guesses. It bears noting that this is a complement to other techniques, not a complete solution to geolocation. Country-level information for an ISP might at least get you the right version of Google, but services that want precise locations might also want access to a GPS on the device; there is a whole space of location sources, and this addresses a subset of it.

An example of what we might do to solve this is to add some location fields: have a location attribute in an inetnum record, a data structure you can put in the database which describes geolocation. Let's reuse the existing stuff, the same database, just adding fields to it, and you can use a lot of the same provisioning systems you have got already. But you would have to update all of those things at the same time. And the location format doesn't have a whole lot of structure to it; someone that wanted to make automated use of it would have to parse that out somehow and make some sense of it.
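As a sketch of that first option, a hypothetical inetnum object with an added location field might look like the following. The `geoloc:` attribute name and the free-text coordinate format are invented here purely for illustration; no such attribute is being proposed verbatim in the talk:

```
inetnum:      192.0.2.0 - 192.0.2.255
netname:      EXAMPLE-NET
country:      IT
geoloc:       41.9 12.5 Rome    # hypothetical unstructured lat/long field
descr:        Example net block
```

The lack of structure in such a field is exactly the parsing problem Richard points out next.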

There is also a major problem with this in terms of privacy. By sticking this directly in the WHOIS database you give everyone access to that level of geolocation information, regardless of who they may be and what relationship an ISP might have with them. There is sensitivity around location information, of course. What country I am in may or may not be so sensitive; if I am in San Marino or the Vatican, that's pretty precise, but things like country level don't have a lot of sensitivity. If you talk about the precise locations which are valuable to these content providers, you quickly get into sensitivities. So there are lots of interesting questions about how to manage privacy in geolocation, and I can point you to the IETF Working Group that is entirely devoted to geolocation and privacy; there are lots of interesting discussions that go on there. But at a high level, how do we need to address this? What I'd like to propose is that we punt on this, let the ISPs solve it, and not do it in the WHOIS. How could we do that? We do things by reference. Anything in computer science can be solved by another layer of indirection. Let's say we have a location server reference, so an ISP can operate a location server that can offer location information for their subscribers. You say: this is the location server for this net block; if you want to find out about location for these subscribers, go there. And then that server can respond to a standard web service. So, if you want to find out where this is, this is the APNIC meeting network where I presented this last time, you can send a query that says: tell me about this IP address, or tell me about this prefix. And you get back a nice structured representation telling you that you are in Italy, the city you are in is Rome, so you don't have to parse the unstructured format that we talked about before.
The advantage of this indirection is that all that's in the WHOIS database is this relatively insensitive address of a location server. It doesn't actually tell you any location information at all. To actually get the location information you have to go ask this server, and at that point the ISP can decide: do I want to authenticate whoever is requesting? Am I going to give different access to my local law enforcement agency versus Google versus Yahoo? The ISP knows who is asking and has control over that. So you are really empowering ISPs to control access to their subscribers' location, and that has sort of inherent privacy benefits, in that there is a relationship between the subscriber and the ISP and they can negotiate with each other how to manage their privacy.
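A minimal sketch of the client side of that by-reference scheme, assuming a hypothetical location server that answers prefix queries with a structured JSON document. The field names here are invented for illustration and are not any real protocol:

```python
import json

# A hypothetical structured answer an ISP-run location server might
# return for a prefix query, instead of an unstructured text field.
RESPONSE = json.dumps({
    "prefix": "192.0.2.0/24",
    "location": {"country": "IT", "city": "Rome"},
})

def parse_location(raw):
    """Extract (country, city) from a structured location-server response."""
    doc = json.loads(raw)
    loc = doc["location"]
    return loc["country"], loc["city"]

print(parse_location(RESPONSE))  # ('IT', 'Rome')
```

The point is that the consumer gets typed fields back rather than having to guess at a free-text format, and the ISP serving the answer can apply access control before responding.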

So, I'll skip mostly through this. It's got sort of the converse of the positives and negatives of the previous section. You have got a structured format for location and a lot of good support for internationalisation, because it's all XML on this web service, and there are a lot of privacy benefits. It's more complex in terms of how a consumer would get location information, and you need new databases and new servers to make this all work.

Obviously there are caveats that go along with this. It's not an ultimate solution; there are a lot of existing location databases that provide IP geolocation, and this is not a replacement for them. I would argue it's an enhancement to their service, another source they can draw on to come up with location maps of the Internet. And the major pitfall that a lot of people stumble on is that there is no guarantee of accuracy. This is data that has more authority, if I might say, because it comes directly from the source, directly from the ISP rather than from some guess. But operators can lie. If I am an operator in Slovakia, I can say my customers are in the US so they can get access to Netflix or the iTunes store. So there is some incentive to lie, but you could argue there is also an incentive to make a wide variety of applications work, so maybe that averages out to something like telling the truth. There are a lot of caveats that come with it, but I thought maybe it's useful enough that it would solve some folks' problems.

A question for the group is: is this a problem worth solving, getting location-based applications to work right by allowing ISPs to provide information on this? Are either of these proposed solutions worth doing, and if there were such a system, would you contribute data for your network? I am glad to get feedback or further questions.

AUDIENCE SPEAKER: Just to quickly add to that, because I know you have to dash off. You have been talking about this from the problem of the subscriber and how a provider of a service can tell where the subscriber is, so they can have services tailored to them, or be denied access to services because content isn't licensed in their country. Actually, we see a lot of these problems in reverse. We have people we host websites for; they say: we want our website to appear to be in this particular country, because this is our national website in this country, and we just happen to be hosting it with you on a box in another country, and the big search engine that everybody knows thinks it's actually in the country where it's hosted, and it's affecting our search engine optimisation. We say to them: well, we are not actually going to do anything stupid here and try to put location information in the RIPE database; what we are going to do is tell you to speak to the search engine, because they took a no-nonsense approach here and said: if you are a web master you can sign up and tell us where you think this site should be and we'll reflect it that way.

RICHARD BARNES: I guess this would sort of solve that problem, because it does provide that form of declared location.

AUDIENCE SPEAKER: Exactly, but it's about trust. That's the point I was trying to make: they have decided to trust the website's owner and that works for them.

RICHARD BARNES: The incremental advantage of doing this in a central database, rather than with each search engine independently, is that this gives you a consolidated place where you can make that declaration and have everyone pick it up. There is clearly a question of whether that's a feature you want or not, which is sort of the general problem with this declarative sort of thing, sure. Shane.

SHANE KERR: There is definitely a demand for these kinds of services. I don't know if you'd call it a problem, but people really, really want this, and there are commercial services that try to figure this out and try to keep databases up to date. I think the proposed solutions are okay. The problem that I see is that we have five different WHOIS servers, three of which use an RPSL-like format, and then there are LACNIC and ARIN. So it means that anyone trying to consume this data would have to implement at least three different solutions if you use the WHOIS as a way to distribute this. I don't think that's the worst thing in the world, but it may be better to use other ways to do this. And I hesitate to propose any, but that's my suggestion.

AUDIENCE SPEAKER: Kim Ming from AFNIC. We already have a LOC DNS record type to store this kind of information, but in my experience it's mostly unused. Why?

RICHARD BARNES: That does exist and there are a few of those records out there. It has a really limited syntax: you can give a lat/long, latitude and longitude, and that's all it's got. So you can't abstract it at all; you always have a point. You can't say it's within, you know, 500 kilometres of this area, or it's in this big area, to provide some fuzziness, say, for privacy reasons. And there is no way at all to say it's in this country or it's in this city. So this provides very little expressiveness, and it has the sort of discovery problems we heard about in the DNS sessions when we were talking about reverse DNS: if you want to talk about hosts you get into those sorts of issues. So, I will let the meeting continue for now, but if people have suggestions for where to take this, or if anyone wants to write up a policy proposal, I'd be glad to talk off line.
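For reference, the DNS LOC record being discussed (defined in RFC 1876) looks like this in a zone file. It encodes a single point (degrees/minutes/seconds of latitude and longitude plus altitude, with optional size and precision fields), which is exactly the expressiveness limit Richard describes:

```
; owner            class  type  lat (d m s N/S)  long (d m s E/W)  alt
host.example.com.  IN     LOC   41 54 0.000 N    12 29 0.000 E     37.00m
```

There is no way in this syntax to name a country or city, only coordinates.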

WILFRED WOEBER: Thank you for the presentation. What's the interest? What's the situation? I would have another couple of questions, but I think we are running short on time and I don't want to keep you in this room for too much longer.

RICHARD BARNES: I am around the rest of the week.

WILFRED WOEBER: Could we have some sort of feedback? Do you think that this Working Group should actually follow up on the proposal, or rather on the demand, and investigate which solution, whether one or two or maybe a different one, is reasonable? Could you raise some body part, or not? Those in favour? Okay, not really overwhelming. So, I guess thank you again for the presentation. I think we will have a couple of additional questions on the mailing list and then we'll see whether there is more interest, because we don't really have a quorum here for the Working Group, an unfortunate side effect of the previous Working Group. Just stating it as a fact, and I am fine with the result.

Okay. So thank you very much.

We move onto the next item on the agenda, which is Paul Palse from the RIPE NCC walking us through the RIPE database presentation.

PAUL PALSE: Hi, I am Paul Palse from the database group, the database manager, and this is the update.

So, I will just be giving a short introduction to the database group and then some status on action points and outstanding deliverables, some additional projects that we have completed between RIPE 60 and RIPE 61, some highlights from our Labs publications, and then we can go into some questions and answers.

So, a little bit different from my previous presentations, a little bit about what the RIPE database is, as there may be newbies here. We are a public Internet information repository for the RIPE service region; we are also a routing registry, and we store additional information that resource holders put in the RIPE database. We also provide a service that returns data of other registries from the RIPE database in RPSL format, we publish tools on the ripe.net website, and additionally we have started building some prototypes of ideas that we have on the RIPE Labs site.

This is the team, the RIPE database group. Normally I also present a whole lot of operational statistics, as in previous presentations. This time I just want to let you know that there are operational statistics on the frequency of queries, updates, etc., on the ripe.net website.

One little fun stat that we also produced last time is that we see this incredible amount of something we call ego queries, which are hosts that query for their own IP address in the RIPE database. We are not just talking about some queries, we are really talking about approximately half a billion queries between RIPE 60 and RIPE 61, and we are still quite interested in what application actually produces these queries.

Our first line support, our RIPE DBM team, is this group, and I just want to quickly show you a little bit about the amount of tickets that they handle for us. They do the first line support and then escalate to the database group if they need some additional support. Maybe when you download the slides you can see how many. By the way, the figures are the amount of tickets per month, in these categories.

So, action points.

Let me start. These are the action points; fixing mirroring and cleaning up forward domains are additional tasks that we were asked to look at.

First of all, we have action point 54.3, which was making the mnt-by attribute mandatory on person objects, which was the last outstanding object type that didn't necessarily need a maintainer. As you can see it's action point 54.3, so we have taken our time to finish this project, but we announced at the previous RIPE meeting that we would deploy it soon after, and we actually managed to deploy it not so long ago. One thing to note is that if you now want to create a person/maintainer pair, you must actually use a special start-up form to do that, because of the circular reference problem. We also did a really quick stat, a little bit of an impact analysis on the deployment, to see how many people are actually confronted with these changes.

The blue line shows you create and update successes. The red line shows you where updates failed and returned the error that there was no maintainer supplied, or no such maintainer existed in the database. We suspect that some of these errors may come from scripts that have been written against the RIPE database, so maybe you can have a look at your back office and see if you get frequent error messages returned. We will also keep a little bit of an eye on these lines and make sure that we see them dampen.

Action point 59.1:
We have completed all the work on this. It is all about reverse delegation safeguards, preventing redundant hierarchical domain objects from existing in the RIPE database and cleaning up the more specific objects. We have written the code. The DNS group was also heavily involved, and since we have to jointly deploy the code for this, we are working together; I think quite soon after the RIPE meeting we'll run a project and jointly deploy this new system.

Then, during the RIPE 59 meeting there was a presentation by Piotr about being able to do free-text searches on database objects. At the last RIPE meeting we didn't really make any major announcements on this, but, and I will come back to this a little bit later, we have just published a new free-text search for the RIPE database, and it actually covers most of the use cases that Piotr highlighted as being quite helpful to have on the RIPE database. I will show you later.

Then, action points 60.1 and 60.2 were all about the ping-c attribute. There is an RFC, and it has been approved, but we haven't implemented this. We found there was only minor discussion on the mailing list, so we have to actually ask you some tangible questions.

First of all, the attribute: should we add it? On which objects should we put it? Then there were a few lines about whether, when a value is added to the attribute, we should actually check that we get some response before accepting it. Should we introduce periodic checking? Maybe a last-seen attribute? And again, which objects to cover?
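One part of the validation question can be answered without sending a single ping: checking that a proposed pingable address actually falls inside the prefix of the object it is attached to. A minimal sketch of that check (the function name and the idea of doing this at update time are assumptions for illustration, not the NCC's implementation):

```python
import ipaddress

def pingable_in_prefix(prefix, pingable):
    """Check that a pingable value is a host address inside the given prefix."""
    net = ipaddress.ip_network(prefix, strict=True)
    addr = ipaddress.ip_address(pingable)
    # Reject cross-family values (e.g. an IPv4 address on a route6 prefix)
    # and addresses outside the announced range.
    return addr.version == net.version and addr in net

# An address inside the prefix passes...
print(pingable_in_prefix("192.0.2.0/24", "192.0.2.1"))     # True
# ...one outside it, or of the wrong address family, does not.
print(pingable_in_prefix("192.0.2.0/24", "198.51.100.1"))  # False
print(pingable_in_prefix("2001:db8::/32", "192.0.2.1"))    # False
```

Whether the address actually answers pings is a separate, operational question, which is where the periodic-checking and last-seen ideas come in.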

And obviously we could come up with a lightweight proposal and a prototype and publish them on Labs for you to have a look at. Maybe it's something we can do during the Q & A session, some feedback.

Then, cleaning up forward domain data; that has also been quite a long outstanding item. There were 43 top-level domains still using the RIPE database, or there was data in the RIPE database for them. We contacted them all. We found that four of them still have data in the RIPE database and indicated that they would prefer a little bit more time to find their own solutions. 26 we have actually removed. And from 13 we have received absolutely no response; we have gone through the effort of trying to contact them, so I think the time has come for some of you to maybe help us find the proper people to talk to.

Fixing mirroring: this is also something that has frequently come up. This is what we previously called mirroring: having the data of the other registries and routing registries served from the RIPE database in RIPE RPSL format. Because there are differences in RPSL implementations and not all the registries actually expose their data in RPSL, we frequently got out of sync with other sources; if, for instance, attributes changed, our scripts just dropped those objects. We frequently had to reload them just before a RIPE meeting to say yes, we are working on improving the data, but it never got into good shape. So we actually pulled the plug on the mirroring service and started from scratch, and implemented something we now call the global resource service. We currently serve the data from APNIC, LACNIC and ARIN; LACNIC's data we are exposing through this service for the first time. And we are working with AfriNIC to also include them. One thing to note is that we do not include any personal data, just the operational data. And one of the nice features of this service is that you can do hierarchical searches over all the sources in the RIPE database.

So I will quickly go over some RIPE Labs highlights. Since the last RIPE meeting, we improved some of the parsing of the attributes, transforming them a little bit so that we could provide strongly typed object references. We also went through the effort of parsing RIPE lists and RPSL lists and normalised them so that they became attribute-value pairs. We published an RPKI IRR, basically a copy of the RIPE database that serves ROA data as route and route6 objects. We updated the heuristics of the abuse-finder tool that we published before the last RIPE meeting, based on some of the feedback we got from the Anti-Abuse Working Group during the last RIPE meeting. We published some interesting graphs, nine years of RIPE database objects, and, as I said, the RIPE registry global resource service, which is what I talked about in the previous slides, and a new free-text service, which I will quickly demo in a second. And last but not least, a prototype of the RIPE database that shows a clearer separation of what is registry data and what is resource holder data in the RIPE database. We have a separate presentation dedicated to this in this session. I will quickly show you that free-text search feature.

So, I will first start with a search that returns quite a lot of objects, to show you that you can actually filter if you are just interested in a certain object type within the result set. You can also say exactly which attributes you want to search in; it actually does a union of the attributes if you select multiple object types. However, as we said, we do not expose personal data. If you click on an object, you'll be able to navigate to the details, which is a single lookup. We will do some accounting to make sure that you don't mine the RIPE database for personal data, so after a limit of a few hundred queries we will throttle your access a little bit. And also, because this is on RIPE Labs, you will be able to get the response in XML and JSON; basically it reuses the RESTful web services in the background.
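As an illustration of how such a RESTful free-text search might be driven programmatically, here is a sketch that only constructs the request URL. The endpoint path and parameter names are assumptions made up for this example, not the actual RIPE Labs API:

```python
from urllib.parse import urlencode

# Hypothetical free-text search endpoint; not the real RIPE Labs URL.
BASE = "https://labs.example.net/search"

def search_url(query, object_types=None, fmt="json"):
    """Build a free-text search URL with optional object-type filtering."""
    params = {"q": query, "format": fmt}
    if object_types:
        # Filtering to selected object types, as shown in the demo.
        params["type"] = ",".join(object_types)
    return BASE + "?" + urlencode(params)

print(search_url("Example Org", object_types=["inetnum", "route"]))
```

The `format` parameter stands in for the choice between XML and JSON responses mentioned above.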

So back to the slides.
One thing that we would really like to give you a heads-up on is that we would like to come up with a proposal to simplify some of the complexities in our RPSL implementation. Like I mentioned previously, one of the API updates that we did is parsing lists: we actually separate comments out of the values and make them an attribute in the XML schema. We would also really like to make sure that we deal consistently with spaces and tabs in the RIPE database, and maybe look at normalising the IPv6 text string, because there are different ways to do the notation. Some people remove the zeros, some people leave the zeros in, and frequently we get tickets about people not finding their data, but that's just because there is no consistent notation. And there are some very complex MIME arrangements that we support for mail updates that we think we could come up with a better solution for.
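The IPv6 notation problem Paul mentions is easy to demonstrate: the same address can be written several ways, and unless the database normalises on storage and on lookup, searches miss. As a sketch, Python's standard library already produces one canonical compressed form:

```python
import ipaddress

# Three spellings of the same IPv6 address: fully expanded with leading
# zeros, zeros kept but not padded, and fully compressed.
variants = [
    "2001:0db8:0000:0000:0000:0000:0000:0001",
    "2001:db8:0:0:0:0:0:1",
    "2001:db8::1",
]

# All of them normalise to a single canonical compressed notation.
canonical = {str(ipaddress.ip_address(v)) for v in variants}
print(canonical)  # {'2001:db8::1'}
```

Normalising to one form like this on both write and query paths would make the "can't find my data" tickets go away.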

Just to give you an idea, last but not least, about the complexity that we need to have code in place to deal with: this is a valid inetnum object in the RIPE database; it actually exists in the test database. We have just been really creative and constructed this. Basically, our line of thinking is to turn this into something that looks more like this, which will be easier to parse, probably a little bit more consistent, and visually attractive as well.

And that concludes my update basically.

WILFRED WOEBER: Thank you, Paul. One of the first questions I'd like to ask is how we are going to answer your questions; there were one or two slides where you wanted to collect input, so given the fact that we are short on time, I'd suggest...

PAUL PALSE: We can take it to the mailing list. I mean we can also find other ways.

WILFRED WOEBER: There are always hallway discussions as well; if anyone has ideas or input, bring them either to Paul or whoever. There is input already, go ahead please.

NICK HILLIARD: Could you go back to the slide that describes the pingable and ping-c attributes? Okay. The RFC seems to be relatively straightforward. It says that the pingable attribute applies to route and route6 objects only. So, is there any possibility you could explain these questions a little bit more, because it's not clear from the RFC exactly why there is a difficulty implementing it. The reason that I am asking is because I want to put very, very basic, noddy functionality for pingable into IRRToolSet, just so that it won't whinge about it. But I can't test it against anything until there is a database out there that supports it.

PAUL PALSE: These were things that we picked up from the little discussion about it on the Working Group list. Some of the discussions were around adding this kind of functionality to it, so I am not saying that this is actually included in the RFC, but these things have been mentioned, and if anyone thinks they are useful, or maybe it should be an incremental thing where we first just implement the RFC and look further later, it's up to you.

WILFRED WOEBER: Should we take your suggestion that you want to see a very down-to-earth, very bare-minimum implementation of exactly the stuff that is described in the RFC?

NICK HILLIARD: I think it would be a good start. You know, it's fundamentally -- it's a really, really useful attribute to have in a route and a route6 object and, you know, if it's relatively easy to implement in the RIPE database code, I don't know whether it is or not, then, you know, could we see it sooner rather than later? In a basic format.

PAUL PALSE: In a basic format. Yes, we could certainly investigate that and -- yeah, I think adding the attribute itself from the RFC is not a very complex operation. All the other stuff is much more complex.

WILFRED WOEBER: I guess just implementing the syntax is straightforward and we can probably start with doing that? And adding the semantics, like whether we actually check the existence of the address which is given in the attribute, we can maybe take on later?

NICK HILLIARD: Those are details. The existence is an operational problem; if the operator gets it wrong, that's their problem. I don't see any particular need to check it in the database. But yes, the details should be thrashed out on the mailing list. It would be very useful to have the core functionality implemented hopefully soon. As an operator, I actually want to use it to do ping probes and testing and that sort of stuff.

WILFRED WOEBER: Sounds reasonable. So, Nigel, could you please record some sort of an action item on the RIPE NCC. Thank you. Next one please?

AUDIENCE SPEAKER: Just a short comment. I am very thankful for these three texts, because they cover my idea about searching by key.

PAUL PALSE: Thank you.

AUDIENCE SPEAKER: Just a quick question about the half a billion queries: do you know whether they come to the WHOIS port or from the web interface?

PAUL PALSE: It's not -- I mean, we can't distinguish. We can't see that. We do not see whether the queries are made to port 43 or to the web interface; the logs don't show that.

AUDIENCE SPEAKER: And from the source addresses?

PAUL PALSE: We haven't really done that -- just noted, that's exactly what it's --

WILFRED WOEBER: Okay? So then again, thank you Paul. And I'd like to ask Denis to give us the next presentation related to ideas, prototypes and suggested developments to redesign the internal structure of some stuff.

SPEAKER: We briefly touched on this at the last RIPE meeting. We said we had this idea and we were going to investigate it further. We have written an extensive article on RIPE Labs about it, but we realise that not everyone has read it. I am going to give you an outline of the problem statement, why we did this, what we have actually done, the benefits and the impacts, in a very quick way.

So what was the problem that we actually tried to solve? It's recognised that the RIPE NCC as a registry needs to have accurate, reliable information in the database. But that alone isn't enough. There has to be trust in the accuracy, and the perception needs to be that the registry data is accurate. And the problem we have here is that the RIPE database holds registry information, but we also facilitate the storage of resource holders' information all the way down these hierarchies. For anyone who looks at this data, unless you really understand the syntax of RPSL and maintainers, there is absolutely no way you can work out who is responsible for which part of this data. We know that a lot of resource holders do actively maintain the data and keep it accurate and up to date. We also know there are a few who don't really care how accurate it is; they put it in ten years ago, it probably still works, and they have never looked at it since.

One of the problems with that is that every time anybody finds one piece of information in this database which is inaccurate, unreliable or completely false, it reflects on the entire data set. So, the perception is that the data in the database is bad, and this reflects on the registry as well as on those resource holders who don't maintain their data.

So, what we want to do is separate out the data and make it very clear. The goal is to have a very clear visual indication of who is the responsible party for any piece of data that you look at. The method that we chose is to have two completely separate layers. Now, this does not apply to all the data in the database. It only applies to that part of the data where there is a shared responsibility between the RIPE NCC and the resource holder. For example, we are talking about allocated PA objects, assigned PI and aut-num objects, where a certain amount of the data is classed as registry data and some of it is user data. All the rest of the data is completely untouched. I'll explain a bit more about that in a minute.

So for these, this very small selection of data that's involved, we have actually created two layers.

There is the fixed registration data. This is the data that the resource holder has supplied to the RIPE NCC and which we then maintain in the RIPE database; for example, it might be the netname of an inetnum object. The resource holder data is that part of the data which the resource holder can change at any time: your phone numbers, your contact details, your e-mail address, your maintainers. So, some of the benefits of doing this: it's easier to identify which part of this is the fixed registry data, and as the RIPE NCC is the registry, we can assert a certain level of accuracy and trust in that part of the data. The data that we facilitate the resource holders to store in the RIPE database, whilst we would like to think a lot of it is accurate, we have no direct control over.

It's easier then to identify who the maintainer is of any of these pieces of data. You know straight away whether it's the registry that's responsible for it or the resource holder. We also make it easier for the resource holders to update all of their information. Currently, for things like the allocation objects where the data is shared between the RIPE NCC and the resource holder, those objects are maintained by our registration services. So if you want to change your contact details in there, you have to log into the LIR portal, go to the object editor, get the object, and change only those bits which you are allowed to change. By separating these two layers of data, the user part of the data which you can change just becomes a normal object. In the case of the allocated PAs, it is just like any other object: you could update it the same as an assignment object. It has your maintainer on it. You change it. So there is no need to use any object editors in the LIR portal any more.

The way we have actually done it is we have created four new object types. Here they are. And we will deprecate the as-block, because the functionality of as-block has now been incorporated into these new object layers. As you can tell from the names, each of these four objects has a very strong coupling with the current objects. And this is where we have actually split the data: the registry layer will be in the -REG objects and the user part will still be in the normal objects.

We have also added three new attributes. The first is a reference to the organisation object at the registry layer; it's the equivalent of the org attribute in the inetnum objects. For the registry information, we have dropped the changed attribute, because we all know it's pretty much useless; it only has a meaning to the person who put it there. So for the registry data there will be two new attributes, created and last-changed, which obviously mean when it was created and when it was last changed. These will just be a simple date and time stamp with no e-mail address, which is generally abused anyway.

So, basically, all the -REG objects will contain fixed registry data, and all the resource holders' changeable data will be in the existing current objects. This only applies to data that has the shared responsibility. I want to make it clear that all your other objects, your sub-allocations, your assignments, your routing, your DNS, your maintainers, none of that has been touched at all. This only applies to this very small set of data where there is shared responsibility between the RIPE NCC and the resource holder.
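As a hypothetical sketch of the split described above, one allocated-PA object could be divided into a fixed registry layer and a user layer the resource holder updates freely. Which attributes count as "fixed" is an assumption here, since the speaker notes that is still open for discussion, and the "inetnum-reg" label is only an illustration of the -REG naming.

```python
# Assumed set of registry-maintained attributes (open for discussion).
REGISTRY_ATTRS = {'inetnum', 'netname', 'status', 'created', 'last-changed'}

def split_layers(obj):
    """Split one shared object into a registry layer and a user layer."""
    registry = {k: v for k, v in obj.items() if k in REGISTRY_ATTRS}
    user = {k: v for k, v in obj.items() if k not in REGISTRY_ATTRS}
    return registry, user

# Illustrative allocation object, not real registry data.
allocation = {
    'inetnum': '192.0.2.0 - 192.0.3.255',
    'netname': 'EXAMPLE-NET',
    'status': 'ALLOCATED PA',
    'admin-c': 'XX123-RIPE',
    'mnt-by': 'EXAMPLE-MNT',
}

reg_layer, user_layer = split_layers(allocation)
print('inetnum-reg:', reg_layer)   # maintained by the registry
print('inetnum:    ', user_layer)  # freely updatable by the holder
```

The point of the split is visible in the two resulting objects: the contact and maintainer attributes end up in the user layer, so the holder can update them like any normal object, while the registry layer stays fixed.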

That was the first point of the impact. So basically the only impact is on the shared data. But that does mean you might need some changes to your internal processes; for example, when you create aut-num objects, the AS name will probably be in the fixed data. The syntax of these has changed very slightly. Also, if you have any script that parses the output of a query, where the data is separated into those layers and you query for that data, you will get two objects back instead of one, so you have to accommodate that in your query output.

We have also tightened the implementation of address policy rules in the database. Currently in the update engine we have business rules that are scattered across the code. While building this prototype, I looked at the list and decided it was actually easier to completely rewrite a new business rule module than to try and hack around with the existing business rules that were spread everywhere. And in doing that, I realised that there are quite a lot of holes in the business rules we have at the moment: things like the very strict rules about what status values you can have and how they can relate to parent objects and child objects and all the rest of it. The current implementation doesn't fully match the policy. So in this prototype we have completely rewritten all of that, and you may find that some things you could have done before, which you weren't supposed to do, you can no longer do.

So where is it? On RIPE Labs, we have written quite a long article giving all the background to this, explaining what it is we have done and why we did it, and we have included a lot of technical details. We have got the object templates for these new objects and for the ones that have changed. We have got some example data in there which shows the different ways in which this new separation would be used. And we have also built a prototype with this code and with a converted set of RIPE database data. So we took the data from about two weeks ago, using the split files, and we have basically converted it into this separated form and loaded it into the prototype. We haven't included all of the data; for example, there are no domain objects in there, because none of that has changed and you don't need to be able to look at data that hasn't been touched. But all your data is in there. All the maintainers are the same. All your credentials are the same. You can query for your own data and see what it would look like in this separated arrangement. You can do updates via syncupdates or mail updates using your own authentication, so you can make sure you can still work with all your other data untouched, and you can look at what you can do with this separated data. There are a few new error messages in there if you try to change the fixed data. Bear in mind this is the first iteration of a prototype. We have made some assumptions in it; for example, on the actual data that we would regard as fixed registry data. This isn't set in stone. It's just a prototype, so don't worry too much if you think that fixed data shouldn't be fixed or that variable data should be fixed. That's all open to discussion at a later stage.

So, any questions?

WILFRED WOEBER: Yes, maybe while you walk up to the microphone, one question about the status of this thing. This is the concept and this is the prototype. What are the next steps you would suggest? Is there something you have decided, or are you actually out to collect feedback? Do you want us to test-drive it and then later on decide whether this is the right way of doing it? What is the formal status of that?

DENIS WALKER: I think perhaps Andre could answer that one, in terms of how we move forward on this and what sort of timescale we are looking at.

WILFRED WOEBER: The reason I am asking is that there will be a little bit of impact on some parties, all of those who probably have some sort of automated scripts, home-grown programmes, interfaces to the database, that sort of thing. I think we should make sure we do not spring big surprises on anybody.

AUDIENCE SPEAKER: Well, it's also a procedural question. And I think what we'd like is to get some guidance from the community. We can go through a PDP process, because this is a significant change to the database. As part of that process there will be an impact analysis, of course, performed by the RIPE NCC, so that will be more formal. If the community thinks otherwise, we can also follow a less formal process: we have set up a prototype, and we can perform an impact analysis based on our experience with this prototype and share it with the community. So, I think we need some community guidance on how formally we would like to move forward.

AUDIENCE SPEAKER: This proposal is going to be withdrawn for a while, of course, but as the proposer I have to state that this prototype will help to introduce the idea of linking to the sponsoring LIR, because later it could be taken from your internal database and put under your control in the WHOIS database. So in this way I support the idea of this new prototype, because it looks nice and helps to do those things better, I think.

WILFRED WOEBER: Thank you. I should probably in that context also state that the formal withdrawal or stalling of this thing does not actually mean that nobody cares about it any longer. Quite the contrary: we are definitely going to work on this, and I'd like to take this opportunity to make sure that we are going to make progress in the direction that was set out in those three proposals. Thank you.

SHANE KERR: In general, I really appreciate the ideas and goals behind this effort. There are a few minor details, though. One is that this moves further away from RPSL, which, as Wilfred talked about, risks breaking automated tools and things like that. There has been some discussion earlier this week about looking at RPSL in general. So I think we should maybe not confuse two separate issues, but they are not totally separate, right?

The next point I have to make is that if we are talking about changing the way the database works in relatively major ways, maybe we could take this opportunity to do even more radical surgery. The other thing is that we should possibly also try to coordinate this with the other RIRs. And the reason I mention these two together is that APNIC and AfriNIC use software based on the RIPE NCC software, right? Their database structures are not identical, but they are very similar. ARIN, by contrast, has a totally separate schema that they use for things. However, there are parts of that design and schema which make sense, and maybe it makes sense to work with the other RIRs and try to end up with a model for registry data that's more compatible across all the RIRs, including ARIN. And I say that because I helped them design it, so...

WILFRED WOEBER: Okay. So we may come back to you to get a little bit of guidance, but I think it's a very good suggestion. I also seem to be hearing the feeling, although there were only two or three individuals bringing it up, that we should maybe go through a rather more formal process, as soon as we know what we want to achieve and how to achieve it, and maybe really resort to the PDP. Thank you for that input.

DENIS WALKER: There are many other changes we could and would like to make to the database structure in terms of its design, but you can't really do too many changes in one go.

SHANE KERR: I agree with that. I think maybe it makes sense to envision the final -- well, you can never know, because the Internet evolves and you can never know what's going to happen next. But have some more strategic view of what you want the database to look like in the long run and come up with a step-by-step way to get there. This might be step 1 or step 0 or step 10, whatever.

WILFRED WOEBER: Thank you for the input. Thank you, Denis, for the presentation. And I'd invite Piotr as the author and the presenter for the next two items. The next thing is what I called IDN "ifly" in a sloppy way; there is a much better title in the presentation. So the stage is yours.

SPEAKER: My name is Piotr Strzyzewski. I am from the Silesian University of Technology Computer Centre. The title is internationalisation of the Internet resources registry. As Wilfred said, that's pretty much the same as in the agenda.

First of all, let me recall the reasons for this. The first reason is that various languages and alphabets are used natively in the RIPE service region. And the second reason is that even more languages and alphabets are used by a number of inhabitants of the RIPE service region. I need to say, when we are talking about different alphabets, some of them are based on the English alphabet or Latin letters and some of them are non-Latin: for example Cyrillic, from Russia, Ukraine, Belarus and so on, Greek of course, the Arabic alphabet, and probably some more. As for the number of inhabitants, I was thinking about immigrants, so for example the Chinese alphabet, Japanese and so on. We have a load of them in the RIPE service region. And some alphabets use Latin characters and some do not.
And the main thing is that writing names, addresses, words and so on using only US-ASCII generates problems.

Just to bring these problems to your attention, I prepared an example. First of all, please consider two Polish words and three meanings of them. I should say that in the Polish alphabet we have nine extra letters in addition to the English ones. The first word is "Polka", which, if you write it with a capital letter, means a Pole, a Polish woman. If we write it with a non-capital letter, it is the Czech national dance, just "polka" in English. If we introduce the Polish letters into it, it means shelf or rack: "półka" in Polish. So two different words, three different meanings.

Now, consider those two names put into the database. The first is a lady called Maria Polka; Maria is just Mary. The second lady is called Maria Półka. If we put those two names in the database, both of them will be "Maria Polka". They are not the same person, but the database will accept only English letters. And people definitely do not like mistakes in their names. I don't know whether it was in the Database Working Group agenda or another Working Group's agenda, but there were mistakes in names and the agenda was corrected.

Okay. Next example, with the same words. Now consider two other entries in the database. The "ul." at the beginning means street in Polish; that's the abbreviation for street. So the first is the street of Maria Polka, and that's even tougher for you guys, because Polish is an inflected language and that's the genitive form. So it would be the street of Maria Polka, or just of the polka dance, for example. And the second address would be the street of Maria Półka. Once again, for both of them we have to put the English-like text into the database. That is a problem, because the post and law enforcement agencies definitely don't like such things. It could cause problems with making contact, especially if we want to send a legal notice to the real address; if there are two addresses in a huge city which could be put into the database like those two examples, that could be a problem. Sometimes, when I read an extract from the database, I know there is a word which could be understood in different ways, and sometimes I am not sure which is correct; I need to use Google. Sometimes it's easy, sometimes it's not.

So the idea is to modify the database software to introduce internationalisation mechanisms. This is a very initial idea, because I know there are a lot of problems which have to be addressed. My first idea was to introduce some extra fields, like "name-idn", as an extension to the current ones; these are examples. Those extra fields could just be put in the objects, and we could send an update with both fields. So, for example, I can put "Maria Polka" in the US-ASCII field, and this person will then also have the current name with the Polish characters.
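A minimal sketch of the paired-attribute idea: the existing field stays US-ASCII for legacy clients, and a hypothetical "-idn" companion carries the UTF-8 form. The attribute name "person-idn" is an illustrative assumption, not part of any actual proposal text.

```python
def is_us_ascii(value):
    """True if every character fits in 7-bit US-ASCII."""
    return all(ord(ch) < 128 for ch in value)

def add_idn_pair(obj, name, ascii_value, idn_value=None):
    """Store an attribute with an optional internationalised companion.

    The primary field must stay US-ASCII so old clients keep working;
    the '-idn' field (hypothetical naming) holds the native UTF-8 form.
    """
    if not is_us_ascii(ascii_value):
        raise ValueError(f"{name}: primary value must be US-ASCII")
    obj[name] = ascii_value
    if idn_value is not None:
        obj[name + '-idn'] = idn_value  # stored as UTF-8
    return obj

person = add_idn_pair({}, 'person', 'Maria Polka', 'Maria Półka')
print(person)
```

Old clients would see only the ASCII field, while internationalisation-aware clients could request both, which is the query-flag question raised later in the talk.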

Problems: I mentioned the problems. There are a number of problems which we should address and resolve before this kind of mechanism can be introduced.

Which code page should be used? I think UTF-8 is the most correct answer to that question. The second problem is: should automatic code page translation be considered? A load of people use a load of code pages when sending e-mails and when using web browsers, and we shouldn't forget that database updates are sent using several mechanisms: web updates, syncupdates and mail updates. So automatic code page translation could be considered, and could also be a problem.

If yes, which code pages should we support? The ISO family, the Windows code pages, or the old IBM code pages? There are a load of them for a load of languages. Just to mention, because I used the Polish example: if I want to use a code page for the Polish letters, there are over 20 of them, and the most popular are of course UTF-8, the ISO family, the Windows ones, and maybe the now rather deprecated IBM code pages.

Now, should automatic conversion to US-ASCII be considered where possible? Sometimes it's easy: with my example of Maria Półka, I can easily select the transliteration, I can easily make an automaton to do that. But I am not familiar with, for example, the Georgian alphabet, and I am not sure whether they have strict rules for transliterating Georgian names, addresses or words into US-ASCII. So, should we consider that and implement it?
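The transliteration question can be illustrated with a small sketch. Unicode NFKD decomposition strips most diacritics (Polish ą, ę, ś and so on), but some letters, such as ł, have no decomposition and need an explicit mapping, which is exactly why this cannot be fully automatic for every alphabet. The mapping table below is a hand-made assumption covering only the Polish case.

```python
import unicodedata

# Letters NFKD cannot decompose; an explicit (assumed) fallback table.
MANUAL_MAP = {'ł': 'l', 'Ł': 'L'}

def to_ascii(text):
    """Best-effort US-ASCII transliteration of Latin-based text."""
    # Apply the manual table first, then strip combining marks via NFKD.
    text = ''.join(MANUAL_MAP.get(ch, ch) for ch in text)
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if ord(ch) < 128)

print(to_ascii('Maria Półka'))  # -> Maria Polka
print(to_ascii('Święta'))       # -> Swieta
```

For non-Latin scripts such as Georgian or Cyrillic this approach produces nothing useful, so a per-script transliteration scheme, or none at all, would have to be decided case by case, as the speaker suggests.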

And another problem is that the WHOIS client should be able to ask for either limited or full data, because some old WHOIS clients are still not internationalised; they do not work in an internationalised environment. They just use plain text with 8-bit characters and so on. So we have to think about extra parameters to the WHOIS client, to request the full data or to request only the non-internationalised data. Of course, it will be much easier to implement on the web page. But there are still syncupdates and mail updates, and we should think about the impact on clients' scripts.

The next problem, last but not least, is interoperability with other databases. We have just heard that the mechanisms in other registries' databases are different, so we have to take that into account. And finally, pros and cons.

I have only one pro and one con.

The data will be more accurate, in my opinion, because if we put it in the database in the native language with native characters, all the law enforcement agencies will be happy to read that kind of data and to write correct e-mails without problems.

And of course the biggest con: some problems have to be addressed and solved. Those problems were on the previous slide, so I am not mentioning them here.

So, any questions, comments? I know that it's a tough topic.

WILFRED WOEBER: First of all, thank you very much for bringing it up, and in particular at this point in time, because I think it's just perfect timing when we have had a discussion about having more accurate, more reliable data. I would also think that the RIPE NCC has already done a little bit of thinking here, because if you have got contracts using those character sets and you try to put that into authoritative registry data, they will have the same problem, I guess. So I think it's the perfect point in time to bring it up.

AUDIENCE SPEAKER: Raymond from Elisa, Finland. Thank you, it's a good idea. Scandinavia also has three of these special characters, but suppose I had a client with a Polish name, what do I do? I have a different country code.

SPEAKER: I mentioned the problems. That's another one.

WILFRED WOEBER: One of the ideas I already had while you were talking is that there are probably some formal rules in various countries we could investigate, because you can have a person coming from a different country, with a name in a different character set, so there are probably established ways of dealing with that problem. We could maybe try to copy those or learn from them. Shane?

SHANE KERR: I think this is a good idea. I have heard this discussed many times in different forums. Internationalisation is one way to call it; the other way to call it is localisation, and the reason I mention that is that localisation means that if I don't live in your locale, I don't know how to read or write your character set. So I think the way you propose, having an IDN version of the various fields, is a really good one. We just need to keep in mind that people in, say, Arabic countries probably don't have a Cyrillic keyboard, so they need an ASCII way of looking at these things. As long as we keep that in mind and don't get too localised, then I think it's a really great idea, and a lot of work to keep the RIPE NCC busy.

SPEAKER: I am pretty sure the RIPE NCC has a multi-cultural staff, and they probably have ideas about how to use their own languages and can contribute to this informal proposal.

WILFRED WOEBER: I think this is going to give us some fun. But I really like the fact that you brought it up, because it's something we should start thinking about. Even the DNS guys managed to deal with this one way or another; there are now top-level domains even in Japanese script and that sort of thing, and we are still alive. So we should find some sort of solution for this, I guess. Okay, thanks for this presentation, and may I also ask you to just go on with the next one.

SPEAKER: That will be much simpler. The reason for this next informal proposal is that the majority, if not almost all, objects use password authentication. I spoke yesterday with Denis, and he said that over the last few months the majority of updates were sent using password authentication; there were something like a million, yes?

DENIS WALKER: Last five months, 1 million passwords versus 60 thousand --

SPEAKER: The second number -- okay, so the answer was: over the last five months, 1 million updates used password authentication, 60,000 used PGP, and occasionally others used X.509. So as we can all see, a lot, if not almost all, use password authentication; that's the main mechanism.

The current implementation supports only MD5. And as probably all of you know, MD5 is not collision resistant. It's considered not suitable for further use; it's generally considered to be cryptographically broken. So, in conclusion, objects are not well protected, and because we have been talking about resource certification, I think we should take into account that if we want a strong mechanism for certifying the resources, we should also think about a stronger mechanism for protecting the objects in the database.

So, my idea is that we should modify the database software and introduce stronger hash algorithms. First of all, we should skip SHA-0 and SHA-1, because they are broken like MD5; I think that's obvious, so I did not include them in the slides. We should think about the SHA-2 family; I am not sure whether all of the SHA-2 family or just the strongest ones, I have not thought that through in detail. Maybe GOST, although I have read that there are some problems with the GOST hash function, so that's just an initial idea. And maybe others? When I say others, it comes to my mind that we should give a mandate to someone from the RIPE NCC, and the first person who came to my mind was the chief technical officer, to introduce a new hash algorithm on his own decision. The reason is that there is still a contest running for SHA-3, and I am pretty sure that algorithm could be widely used in the future and could be introduced in the database. So, I am thinking about giving that mandate to Andre, if of course that would be a good idea for everyone.
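A sketch of what a stronger maintainer-password scheme could look like, replacing MD5 with salted, iterated SHA-256 via PBKDF2. The scheme name "SHA256-PW", the storage format and the parameters are illustrative assumptions, not an actual RIPE database mechanism.

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Hash a password with a random salt and many SHA-256 iterations."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt, iterations)
    # Assumed storage format: scheme$salt$iterations$digest
    return f"SHA256-PW${salt.hex()}${iterations}${digest.hex()}"

def verify_password(password, stored):
    """Recompute the digest with the stored salt and compare."""
    scheme, salt_hex, iterations, digest_hex = stored.split('$')
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 bytes.fromhex(salt_hex), int(iterations))
    return digest.hex() == digest_hex

stored = hash_password('example-passphrase')
print(verify_password('example-passphrase', stored))  # -> True
print(verify_password('wrong-guess', stored))         # -> False
```

The per-password salt and high iteration count are what make such a scheme resistant to precomputed and brute-force attacks, which is the property the plain MD5-based mechanism lacks.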

And that's the first part of the informal proposal. The second part is to remove support for MD5 within one or two years of the introduction of the new algorithms. We have done that in the past: we removed support for the NONE mechanism and for the CRYPT-PW algorithm. So the RIPE NCC basically has experience with that procedure, with that process. They have learned the lesson from history, so that will not be a problem, I think.

Still, this removal is necessary because MD5 is broken, so we have to insist on moving to stronger algorithms, not on a voluntary basis but on a mandatory basis.

So pros and cons.

Objects will be better protected against abuse.

And the cons: LIRs will have to change their maintainer objects, so basically some scripts will have to be changed. And some objects will be locked after MD5 removal. We have had that lesson in history and taken some experience from it; the process has been done twice or three times.

WILFRED WOEBER: At least two times. We should still remember how to do it properly.

SPEAKER: I don't know how many maintainers are still locked, but I think that doesn't have an influence on the idea.

So, are there any questions?

AUDIENCE SPEAKER: I just want to point out that MD5-CRYPT is not actually a simple MD5 hash. It is much stronger, using a salt and, if I remember correctly, 1,000 iterations. It is not plain crypt, compared to this, for example. So this is of course a potential problem, but not so severe.

SPEAKER: Yeah, but as far as I remember, there is a possibility to generate a collision within a reasonable time on a normal computer.

AUDIENCE SPEAKER: Now, you can produce texts with the same MD5 hash, but that is not an MD5-CRYPT password, which is hashed a thousand times.

SPEAKER: I know that.

AUDIENCE SPEAKER: David Freedman, I missed the first minute or two of your presentation but the question I have to ask and the thing I am really missing here is why would we invest effort in preserving simple password schemes as opposed to signatures?

SPEAKER: Because the majority of maintainers use the passwords, almost all of them.

AUDIENCE SPEAKER: Why?

SPEAKER: I don't know why they are using passwords, but I think it is the mechanism they are used to; they do not want to invest in changing their scripts to --

AUDIENCE SPEAKER: Perhaps we could invest our time more wisely in creating software to help them migrate to using signatures.

SPEAKER: Have you heard about the stats from Denis?

AUDIENCE SPEAKER: No.

SPEAKER: I will repeat that: 1 million updates over the last five months using password authentication, 60,000 updates using PGP authentication, and occasional use of X.509 authentication. That means almost no one wants to use the other mechanisms.

AUDIENCE SPEAKER: Have they been incentivised to switch? That's the question.

WILFRED WOEBER: I see the beauty of this proposal: as long as we provide the easy way, there is no incentive to do the right thing if the wrong thing is more comfortable.

SPEAKER: I think this kind of proposal will die in the PDP process because people will definitely be against that. But that's my personal point of view.

WILFRED WOEBER: My suggestion is that we just go back to the mailing list if we propose to deprecate the password technology from the very beginning.

CURTIS: One of the LIR Portal functions requires a maintainer password to maintain organisational objects. You can't do it if you have only PGP keys on your maintainer; you have to withdraw the object and mail it in, you can't use the portal.

WILFRED WOEBER: Thanks for the input.

SPEAKER: Are the X.509 certificates used on the LIR Portal or not? Because there is something like that on the LIR Portal. I think Curtis is referring to the feature where you enter the password and it gives you a cookie for a certain amount of time in webupdates. Is that right, Curtis?

WILFRED WOEBER: Okay. Andre.

ANDRE: I think it's a relatively simple and practical change, so I think it's an improvement. I have a question: what's the reason for the third hash algorithm? I mean, if we watch the space and migrate to safer algorithms in time, I think we are safe. So my proposal would be SHA-2, and we just migrate to the next algorithm when SHA-2 becomes weak.

SPEAKER: That was the idea of my mandate: if something gets broken, like SHA-2 for example, because as far as I remember it derives from SHA-1, then we will not have to go through the PDP process to improve the database security, because there will already be a mandate to do so.

AUDIENCE SPEAKER: Ah, maybe I misunderstood.

SPEAKER: Maybe I was not clear, sorry for that.
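The migration Andre describes is commonly handled by tagging each stored hash with the algorithm that produced it, crypt(3)-style, so a newer scheme can be phased in without changing the update interface. A hypothetical sketch (all names invented; plain unsalted digests are used only for brevity, not as a recommendation):

```python
import hashlib

# Hypothetical: map of scheme tags to hash constructors. A real deployment
# would use salted, iterated schemes rather than bare digests.
SCHEMES = {"MD5": hashlib.md5, "SHA256": hashlib.sha256}
PREFERRED = "SHA256"

def make_hash(password: str, scheme: str = PREFERRED) -> str:
    """Store the digest prefixed with the scheme that produced it."""
    return scheme + "$" + SCHEMES[scheme](password.encode()).hexdigest()

def verify(password: str, stored: str) -> bool:
    """Verify against whatever scheme the stored hash names."""
    scheme, _, digest = stored.partition("$")
    return SCHEMES[scheme](password.encode()).hexdigest() == digest

legacy = make_hash("secret", "MD5")   # existing entry under the old scheme
current = make_hash("secret")         # re-hashed with the preferred scheme
```

Old entries stay verifiable, and each successful update is an opportunity to re-hash with the current preferred algorithm, which is the gradual migration being discussed.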

AUDIENCE SPEAKER: First, I support improving the hashing algorithm; as you say, it's not as strong as it was when it was introduced. And second, just a historical point on why passwords are still around: it seems passwords are what people like to use. When PGP was introduced, we at the RIPE NCC, where I worked for a long time, went as far as signing a licence deal with PGP International. We got a couple of thousand licences just so that everyone could have one for free. We removed every possible barrier, and people still didn't use it. I don't know why; perhaps because it's much easier to have a post-it with your password next to your screen.

AUDIENCE SPEAKER: I just have a fond memory to share. We acquired another LIR some years ago, and the core of their system for making assignments and registering them in the database revolved around a small Perl script. All this Perl script did was produce a mail to be sent to auto-dbm, and it said "password:" followed by the password for the maintainer. That was all it did, and no one really understood how it worked. When you looked at the system and thought about how much engineering would be required to move it to syncupdates or signing or whatever, you realised it's quite a lot of engineering just to change one small thing. I do understand where people are coming from, but that doesn't change the fact that we still need to deprecate these technologies.

WILFRED WOEBER: Okay.

ANDRE: Just a minor comment. One of the complexities is that PGP will not work with syncupdates, and some people use syncupdates. Am I right?

AUDIENCE SPEAKER: I am not a fan of "we can't do it because it won't work with X". Let's just fix X so that it works.

SPEAKER: Can I make a comment on that?

WILFRED WOEBER: Andre, I think we have to repeat the question.

SPEAKER: No, I want to make a comment on that. I am pretty sure that X.509 certificates would work with syncupdates, or am I wrong?

ANDRE: I was mentioning PGP.

SPEAKER: If we want to deprecate the passwords, we can move to X.509 certificates. As I said, they are only occasionally used and, as far as I understood from Denis, they are not working properly with syncupdates and webupdates. And the documentation is broken, as far as I know.

WILFRED WOEBER: So, absent any substantial comment, my proposal to move forward is, first of all, to thank you for the presentation and for the problem statement.

Secondly, I'd like to double-check with the RIPE NCC: could we actually put an action on you to investigate what the proper next level of protection for the hashes is? If the answer simply is what has already been suggested here, then could we ask you to go ahead and implement it? I presume this is not rocket science. And I think all of us, including the community, should start thinking about what we would have to change or add to the process to eventually be able to deprecate password protection. Is this something that sounds reasonable? I see a couple of heads nodding. Okay. Thanks. So again, thank you very much for bringing it up, and I think we'll take the whole thing once again to the mailing list as a wake-up call for the community. Thank you very much.

(Applause)

WILFRED WOEBER: And the last thing on the agenda is item G. I think most of that has already been covered in previous activities during this week and during today. Just to make this community aware of the fact that there have been, and still are, a couple of proposals around: the policy proposals 2010-08, 2010-09 and 2010-10. They are currently receiving some sort of special treatment, and we are going to come back to them, but please watch that space. At the moment it is mostly an issue that relates to Address Policy and to Anti-Abuse, but the discussion also has to be synchronised with the potential implementation and its impact on the database.

It's just to make you aware of those things and please read up on the minutes and the transcripts of the other two Working Groups.

And coming close to the end, there is another note here: ICANN's review team on WHOIS policy, also known by the shorthand RT4-WHOIS. That is an activity prescribed by the relationship between ICANN and the US Government. There is a review to be performed regarding the effectiveness of policies around domain names, in particular the WHOIS technology around domain names. Hence the request to have one individual from the Address Supporting Organisation participate in that exercise, and with my little bit of background in database architecture, access to the database, privacy issues and that sort of thing, I felt it was almost a duty to raise my hand and become involved. That's probably not going to have a very immediate or very profound impact within the foreseeable future, but we may want to take the experience we have accumulated over recent years with data protection, privacy issues around the database, and WHOIS access, and take that forward as input to this review team. Even more important, maybe, is to closely watch the review results of that exercise and try to find out whether there is anything in them which has, or might have in the future, an impact on how we do WHOIS business around the database. Just to make you aware of that fact.

There is a link in the web version of the draft agenda to the web page for this review team and if anyone is interested, just watch this space. If anyone has any particular input, just get back in touch with me. Comment please?

AUDIENCE SPEAKER: I just want to make a small comment on the geographic location data in the database. We are an ISP in China, and we have customers who would like to put commercial websites all around Europe. For example, their target market can be the Middle East, central Asia or any European country, but obviously, as a small ISP, we are not able to provide coverage of the entire world. We have seen that geographic location is very important in making it possible for us to offer customers service in different locations, especially since it is not necessary to build a presence in every country where speed is concerned. So when you want to run a commercial website in specific countries, the data centres need to be present there, and if the database were able to offer accurate location data, it would be useful to business and commercial users in this case. Thank you.

WILFRED WOEBER: Thank you. Just a very personal comment: I really feel the incentive to think even about the end users, the customers, giving them the freedom to declare what their preference is. Because if I happen to be somewhere in Latin America or in the Netherlands, then I would really like not to get the preference of the hotel I'm in; I would still want to have the script and the information from back home. So I am personally quite familiar with this issue. Thanks for the contribution.

Getting to the last one: that's input from other task forces and Working Groups. We had the input from Anti-Abuse; I think there is not too much to add to that. Just another piece of "watch this space": the IPv6 Working Group has submitted a policy proposal, 2010-06, which you can find in the usual place, regarding registration requirements for IPv6 assignments in the database. I don't think it is contentious; I have not picked up any major problems with this proposal. But as soon as it gets consensus, it will have to be reviewed or commented on by this Working Group and eventually implemented by the RIPE NCC. So that's the reason why it is also here as a sort of place holder on our agenda.

And this gets me to the last one, AOB, as we get very, very close to the end. The question is: do we have any other business? Yes, go ahead please.

AUDIENCE SPEAKER: Regarding item G, those policy proposals that were introduced in the Anti-Abuse Working Group, 2010-09 and 2010-10: it seems they are touching different areas, anti-abuse, maybe registration, and certainly the database. Do you have an idea whether it would make sense to create a task force to attack these issues?

WILFRED WOEBER: Yes. Brian? We just want to get everyone up to date on the most recent situation.

BRIAN NISBET: Brian Nisbet, chair of the Anti-Abuse Working Group. This was raised at the end, or rather the middle, of the Anti-Abuse Working Group session. 2010-08 will be taken back for discussion and redrafting, with the database implementation aspects removed, among a number of other changes. 2010-09 and 2010-10 are going to be withdrawn on the basis that a task force is set up, and that has received the Anti-Abuse Working Group's agreement, the Database Working Group's agreement and indeed Rob's agreement. The task force will be set up to examine the issues brought about by this and the issues that were raised in NCC Services on Wednesday. So yes, I think, is the short answer.

WILFRED WOEBER: My understanding of the next steps in the procedure is that, at least within these three Working Groups, we are going to put out a call for individuals to participate and contribute, and then we will establish this task force: not just formally noting that it has been created, but also populating it, agreeing on a mandate and agreeing on the next steps. Is this correct?

BRIAN NISBET: Yes.

AUDIENCE SPEAKER: There is a comment from Jabber here: "The purpose of the review team is explicitly to verify what data needs to be collected and published, and how it is used by third parties. We will take this basic prerequisite very seriously. Unnecessary data does not need to be collected at all."

WILFRED WOEBER: Correct. And my thanks to Luts for that one. The reason why I have been rather unspecific is that this review team was only installed very recently; it started its activity only a few weeks ago, and after double-checking with the members of the team and with the chair of the team, I was instructed to just report the existence of this review team here and not to go into any particular aspects at the moment, because some of them are still under discussion. But again, thank you to Luts for bringing it onto the table in a personal capacity.

If you are interested in following up what's going on, the texts and documents we are currently working on, they are actually accessible in a particular section, work in progress, which you can find at the URL that's in the draft agenda.

Okay. And with this, I think we can wrap up. We didn't make up any of the time from the delay, but at least we didn't use too much more time than we had planned. And with this picture from a Portuguese beach a few months ago, I'd like to wish you a nice evening.

NIGEL TITLEY: I have just been asked to remind you all that the BoF has started and is running downstairs, if any of you are interested.

WILFRED WOEBER: Okay. Thank you very much. See you in Amsterdam next year.

(Applause)

LIVE CAPTIONING BY MARY MCKEON RPR

DOYLE COURT REPORTERS LTD, DUBLIN IRELAND.

WWW.DCR.IE