THE DNS WORKING GROUP SESSION COMMENCED AT 4 P.M. AS FOLLOWS:
CHAIR: Good afternoon, ladies and gentlemen. We have a slight problem at the moment, so first of all welcome to the DNS Working Group. Jaap is going to be chairing this first session. The podium laptop is completely dead and there is nobody here from the NCC tech support to fix this, so we are not quite sure what is wrong. And for the very nice people who are doing the stenography: we will be starting in a couple of minutes.
CHAIR: All right. Just the usual stuff: this is broadcast on the Internet, so if you don't want to be known, just don't say anything, because everybody is listening in. Also, do we have a webcam here? ?? so it's all being sent out. Everybody is listening in. There are people, Jaap, in the Jabber room, it looks like ?? there is another small change in the agenda: Joao is fighting the gods and he wants to be at the end as far as possible, so hopefully things start to work by that time. And did we have some other stuff on the agenda? Oh, ah, yes, and there is the switch; he was also supposed to be here today ?? somebody else, Dave Knight. Sorry about that, it's confusing. I don't think I can wait any longer. The next one ?? Arab. Do we still have ?? yes, I see. The first speaker is ?? now if you can get your microphone switched on, then I will say that we don't have any action items or other things, as far as I remember. We have two action items and you are going to do them. This is the agenda for today ?? now we are moving too fast. No such thing as a free lunch. Here are the action items; it was Peter Koch to update something, and I have got the details ?? we move to ?? this is going to take too long.
ANAND BUDDHDEV: Good afternoon, everyone. I am Anand and I am the DNS Services Manager at the RIPE NCC, and I am going to give you a quick update on our services. Just a quick introduction to the team. Some of you know us already, but some of you may not. There is, on the left, Wolfgang, who is actually in Amsterdam this week, myself, and [Suart Ostike]; he is moving to our IT team starting from January, and so we will have a new engineer starting from January. So that is our team.
Just a quick introduction to our services. The RIPE NCC allocates IP addresses and we do the reverse DNS for these. We also manage the operations of the K-root server. We provide secondary DNS for some ccTLDs, the developing ones that need our support. We also manage the ENUM zone, e164.arpa. We have an instance of AS112 at the AMS-IX, and that is an AS which is used to black-hole PTR queries for private IP address space. And finally, our department is also responsible for providing the NCC with DNS services, so we manage all its zones and keep them signed and all that stuff.
So, I will start with reverse DNS. We have been very, very busy this year and we have overhauled our infrastructure completely. Previously, we had just three single servers handling reverse DNS and forward DNS, and we have now transitioned this into a multi-server cluster. The first site is operational at the Amsterdam Internet Exchange; we are using a 32-bit ASN to announce prefixes for these DNS services, and this cluster hosts the in-addr.arpa zone, the ip6.arpa zone and all of the other RIPE NCC forward and reverse zones, as well as reverse DNS zones for the other RIRs.
And this is our first site. At the end of this year, we hope that our second site at the London Internet Exchange will also be up, and this service will be Anycast.
We have also been quite busy making changes to our provisioning system. We have some software that translates domain objects in the RIPE database into DNS delegations, and this software is quite old. It lacks some features that we would like to implement, and it's not so easy to actually build features into this software, so we have rewritten all of it and we hope to deploy the new software in December, at the end of this year. Deployment of this software depends on an action item in the Database Working Group, 59.1, which is to clean up the RIPE database of domain objects that serve no purpose. For example, there are several /24-level domain objects which also have parent zones at the /16 level. The current software tries to filter out these domain objects which have no purpose, but the new software is designed not to deal with this, because we prefer that the database be the source of the right information. So this action item in the Database Working Group is going to clean up these objects, and then we shall deploy our software.
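The redundancy check described here, flagging a /24 reverse zone whose /16 parent is also present, can be sketched roughly as follows. This is a hypothetical illustration, not the NCC's actual provisioning code:

```python
def redundant_reverse_zones(zones):
    """Return the zones that are covered by a less specific zone in the same set.

    A /24 object like 1.16.172.in-addr.arpa is redundant when its /16 parent
    16.172.in-addr.arpa is also present, because the parent zone already
    controls the delegation.
    """
    zone_set = set(zones)
    redundant = set()
    for zone in zone_set:
        labels = zone.split(".")
        # Walk up the ancestors, dropping leading labels one at a time,
        # but never shorter than in-addr.arpa itself.
        for i in range(1, len(labels) - 2):
            parent = ".".join(labels[i:])
            if parent in zone_set:
                redundant.add(zone)
                break
    return redundant
```

With the cleanup action 59.1 applied, this filter would find nothing left to flag, which is exactly the point of moving the fix into the database.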
Once we have deployed the software we plan to release some interesting new features. We want to be able to accept glue and DS records for ERX space. This exercise is being done in coordination with the other RIRs and there has been a slight delay, so this feature is not going to be available until sometime next year.
We would also like to support range notation in the RIPE database. Well, we already kind of do: if you want to do delegation for several /24 objects you can submit a single object and use the range notation, and the RIPE database actually decomposes this into individual objects. This works fine the first time, but then if you want to update all these objects, you have to submit individual updates for each of them. What we plan to do is to not do the decomposition at the database level but to leave the domain object with the range notation in it and let the provisioning system do the right thing, and if one wants to update all of one's delegations, one can submit an update for this one object.
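The decomposition step that the database historically performed can be sketched as follows. The name format here is illustrative only, not the exact RIPE database syntax:

```python
import re

def expand_range_domain(name):
    """Expand a range-notation reverse domain such as '0-3.16.172.in-addr.arpa'
    into the individual /24 domain names that decomposition would create.

    Names without a leading digit-digit range are returned unchanged.
    """
    match = re.match(r"^(\d+)-(\d+)\.(.+)$", name)
    if not match:
        return [name]
    low, high, rest = int(match.group(1)), int(match.group(2)), match.group(3)
    return [f"{i}.{rest}" for i in range(low, high + 1)]
```

Under the proposed model this expansion would move out of the database and into the provisioning system, so a single update to the range object updates every delegation it covers.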
We also do RFC 2317 delegation. This is for PI IP address space which is smaller than a /24 in size. We have to do this manually; it's a manual process and sometimes things can go wrong, people make mistakes, and it also takes up a lot of our time. So we plan to support this in our provisioning system so that we don't have to do it by hand any more.
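The records involved in such a delegation can be generated mechanically, which is why it lends itself to automation. A sketch of the CNAME set RFC 2317 places in the parent /24 zone, with an example parent zone name:

```python
def rfc2317_cnames(first, last, parent="2.0.192.in-addr.arpa"):
    """Generate the CNAME records RFC 2317 uses to delegate part of a /24.

    Each host's PTR name in the parent zone is aliased into a child zone
    named after the range, e.g. '0-63.2.0.192.in-addr.arpa'. The naming
    convention follows RFC 2317; the parent zone is just an example.
    """
    child = f"{first}-{last}.{parent}"
    return [f"{i}.{parent}. CNAME {i}.{child}." for i in range(first, last + 1)]
```

Generating all 64 aliases for a /26 by hand is exactly the kind of repetitive work where people make mistakes, which is the motivation given above.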
And finally, we would like to make improvements to the delegation checker. It's been running in its current form for quite a few years now, and we'd like to update it and actually separate its architecture out from the RIPE database so that it's easier to maintain.
We have also been busy in the DNSSEC area and have made several changes. The most important one was that earlier this year, in June, we introduced new DNSSEC signers; these are Secure64 DNSSEC signers and they are FIPS 140-2 Level 2 certified. We also published an updated DNSSEC policy and practice statement, which is available on the RIPE NCC website. And we did our first key roll-over under this new architecture; well, we planned it for September, but we ran into some problems. There was a software bug in the signers which generated signatures with an inception date of 1st January 1970 but also an expiry of 1st January 1970, so these signatures were not usable. Fortunately, we detected this early enough, so we delayed our roll-over until October, and once we had an updated software release for the signers we applied it and did our roll-over.
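A pre-publication sanity check would have caught that signer bug, since both timestamps came out as epoch zero. A minimal sketch of such a check, assuming RRSIG timing fields as Unix timestamps (as in the wire format):

```python
from datetime import datetime, timezone

def rrsig_timing_ok(inception, expiration, min_validity=3600):
    """Sanity-check RRSIG timing fields before publishing a signed zone.

    Catches the kind of bug described above, where both inception and
    expiration were 1 January 1970 (epoch zero), making every signature
    invalid. Arguments are Unix timestamps.
    """
    now = datetime.now(timezone.utc).timestamp()
    if expiration - inception < min_validity:
        return False  # zero-length or absurdly short validity window
    if expiration < now:
        return False  # already expired at the moment of checking
    return True
```

The thresholds here are assumptions for illustration; a real monitoring check would use the zone's actual signing policy.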
One of the things we realised during this is that no matter how much testing we do and how much progress we have made with DNSSEC, there are still remaining issues, and we need to be even more vigilant, I think; we can't be complacent.
So I was curious to see what would happen to DNSSEC in the reverse tree. I looked at the numbers and it's interesting; up until October 2009, which is last year, there was a slow and steady growth and then, all of a sudden, there was a huge jump in the number of DS records appearing in reverse delegations and at the moment the current number is 387 so we have 387 signed reverse zones, that is still quite a small number but it shows that people are starting to take more notice of DNSSEC.
Our operation of K-root is stable. We have 18 instances throughout the world, and we often see traffic levels of up to 25,000 to 30,000 queries per second across the entire cluster.
K-root also has an IPv6 address, which you can see at the top of the slide, and IPv6 traffic makes up approximately one percent of the total traffic that arrives here. Up until last year, we had an approximate IPv6 query rate of 100 to 120 queries per second, but this year we have added several more IPv6 peerings, and some providers like Hurricane Electric have kindly given us IPv6 transit, and that has brought more IPv6 traffic towards K-root. You can see that in this graph: there is a sudden rise when we actually added the transits.
Some numbers about what has been happening with DNSSEC. As we expected, the outbound traffic from the root servers has gone up, because the average packet size goes up with DNSSEC, and we have observed approximately a 30 percent rise in our outbound bandwidth. Currently, we push out about 80 megabits per second of traffic, and at peak times 120 megabits per second. Despite all this traffic, there isn't much TCP traffic. This is a graph of what happened to TCP. It's not particularly spectacular, but we had no TCP traffic before a certain magical date, and then the DURZ was rolled out and it went up to about 40 queries per second, and it's been steady since then. So there is not much more to say about TCP, really.
So we are busy with lots of ideas going into the future. We would like to be able to analyse pcap traffic from K-root and publish results from some of this analysis. What we have set up back at the NCC is a storage system where we can store up to one month of pcap files on a continuous rolling basis. Obviously it's no good to have data without being able to do something with it, so we are examining ways of analysing this data very quickly; we are looking at something like a Hadoop system to analyse lots of data and then publish our results from this analysis. We would also like to deploy some more K-root instances. We are cooperating with one of the RIRs, AfriNIC, under their root server anycast copy programme, and we have deployed one instance already, in Dar es Salaam, and we are looking at deploying another instance in Cape Town. We are also looking at perhaps deploying an instance in Kiev in Ukraine.
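A first step in that kind of bulk analysis is simply walking the capture files. A toy sketch of a classic pcap reader using only the standard library; a real pipeline would use proper pcap and DNS parsing libraries:

```python
import struct

def count_pcap_packets(path):
    """Count packets in a classic (non-pcapng) pcap file.

    Reads the 24-byte global header to detect byte order, then walks the
    16-byte per-packet record headers, skipping the packet bytes.
    """
    with open(path, "rb") as f:
        header = f.read(24)                      # pcap global header
        magic = struct.unpack("<I", header[:4])[0]
        endian = "<" if magic == 0xA1B2C3D4 else ">"
        count = 0
        while True:
            rec = f.read(16)                     # per-packet record header
            if len(rec) < 16:
                break
            _, _, incl_len, _ = struct.unpack(endian + "IIII", rec)
            f.seek(incl_len, 1)                  # skip the captured bytes
            count += 1
    return count
```

At one month of rolling captures, even this trivial pass over the record headers is a lot of I/O, which is why a distributed system like Hadoop is attractive for the real analysis.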
We also want to expand the footprint of the global instances. We feel that some of the regions are under-represented, Latin America, for example, and so we are looking at ways to expand there. We are trying to determine suitable locations by analysing the current data we have and looking at peering, traffic and transit availability, and we are looking at doing these expansions in 2011. So that is a quick update from me; I am happy to take questions from the audience.
JIM REID: Just some random punter off the street. Could you clarify a bit more the plans for signing of ERX space? I see you have got the new provisioning system coming into place very, very soon now, but how soon after that is installed do you expect that the holders of ERX space will be able to get their reverse delegations signed, or integrated into the rest of the signed reverse tree of .arpa, if and when that ever comes about?
ANAND BUDDHDEV: The RIPE NCC has already signed its zones, and some of the other RIRs, like APNIC and ARIN, have already signed their zones as well. We are all working towards preparing our provisioning systems to be able to accept glue and DS records; once all the RIRs are ready with this, then we can start accepting DS records for each other's space. I can't give you an exact date, but it's probably going to be somewhere mid-next year. I see George.
GEORGE MICHAELSON: We are very aware of the fact that you guys have been holding out a long time waiting for us to do this, so Anand is right that we can't offer a committed date yet, but we are very conscious of the need for this service and we are hoping to get that done next year.
JIM REID: Thank you.
ANAND BUDDHDEV: Thanks, George. Any more questions from anyone?
AUDIENCE SPEAKER: Greece. Could you tell us a bit more about how you accept DS records for ip6.arpa today? I saw in your slides that you have around 400 or so signed delegations, and you have delegations in ??
ANAND BUDDHDEV: Yes. LIRs are able to submit domain objects into the RIPE database, and if they want their DS record to appear in the parent zone, then they should add a ds-rdata attribute to the domain object. This is possible for IPv4 and IPv6 as well as ENUM zones.
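The shape of such an object is simple enough to compose mechanically. A sketch that builds the object as text; the attribute names ("domain:", "nserver:", "ds-rdata:") follow the RIPE database, but the values and this helper are illustrative only:

```python
def domain_object(domain, nservers, ds_rdata=None):
    """Compose a RIPE-database-style domain object as plain text.

    When ds_rdata is given, the resulting delegation requests that a DS
    record be published in the parent zone, enabling a signed delegation.
    """
    lines = [f"domain:   {domain}"]
    lines += [f"nserver:  {ns}" for ns in nservers]
    if ds_rdata:
        lines.append(f"ds-rdata: {ds_rdata}")
    return "\n".join(lines)
```

The same pattern applies whether the domain sits under in-addr.arpa, ip6.arpa, or e164.arpa.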
AUDIENCE SPEAKER: This is something that is already happening with your provisioning system?
ANAND BUDDHDEV: Yes.
AUDIENCE SPEAKER: OK. Thank you.
ANAND BUDDHDEV: OK, thanks everyone for listening.
CHAIR: There is a lot of agenda change, ha-ha, and we swapped the last two talks in the hope that the demo gods will work things out before then. There is now the traditional IETF report, done this time by Johan Ihren.
JOHAN IHREN: Perfect. I work for /TKO*PL. I suspect most of you know who we are and what we do: DNS stuff in general. So some of the most recent DNS stuff was obviously the IETF meeting in Beijing last week. One of the problems with DNS stuff at the IETF in particular is that it isn't really located only in the actual DNS Working Groups but is sort of spread all over, and it's impossible to cover all of it. So this will mostly focus on the actual DNS-related Working Groups, although I will try to touch upon some of the DNS stuff that seems to be going on in some of the others; I must admit I am not able to cover it all.
If we begin with DNSEXT, there is one document, draft-faltstrom-uri, which is, I must admit, not the easiest document for me, because these NAPTR things, in spite of my having worked through them many times, still tend to confuse me. SRVs, on the other hand, are quite simple. One of the major points is to try to leverage the SRV record into something more general, and that is achieved in a good way. The other part of this proposal is to try to work around one of the problems with the traditional NAPTR records, which is that if you want to locate a service through a NAPTR, you have to query for the entire NAPTR set and then sort through the various NAPTR records to find the one that identifies the service that is of relevance to you. In this case, with the URI record proposal, you would be able, instead, to query directly for the stuff that you are interested in and thereby speed up the process, or make it slightly simpler. Whether in the end it will be just a URI query, or a URI query followed by a NAPTR query, doesn't really matter; it will still be a simplified process where you can do away with some of the ? of the NAPTR. It's not really clear what will happen with this, because once you start down this path of trying to identify services, as in the underscore notation like _http, you can go in various directions. One direction is to aim for the widest possible applicability, and then you would somehow need to also be able to serve unregistered services, things that do not have an official code point in some sort of IANA registry, for names like _http or whatever, as in _next-killer-app. That is one possible direction. Another is that you may want a tight mapping with this URI record, going from the service, as in the service you are looking for, to the point of service, and then possibly the underscore notation would not be tight enough and we may need something else.
So it's not that this is a clear-cut proposal where everything is done; it's more that this is something that shows that there is more work to be done, because NAPTR and SRV were not the end of the story.
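The direct-query idea can be illustrated by how the owner name for the lookup would be built. The _service._proto convention here follows existing SRV practice and is an assumption about how the URI record would be named:

```python
def uri_query_name(service, proto, domain):
    """Build the owner name for a direct service lookup.

    Instead of fetching a whole NAPTR set for `domain` and sorting through
    it, the client queries one name for one service, e.g.
    _http._tcp.example.com.
    """
    return f"_{service}._{proto}.{domain}"
```

A resolver would then issue a single query of type URI for that name, rather than retrieving and filtering every NAPTR record at the zone apex.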
Next draft: identical resolution. This is, how shall I put it? A very common thing that tends to happen in the IETF is that there is talk, and then there is more talk, and the talk continues, and after having talked for a very long time it's sort of clear that there is no agreed-upon problem statement. In some cases, that doesn't really stop people and they continue talking, but in the name of reaching closure, stepping back and agreeing on a problem statement is probably the right way forward. In this particular case, which has to do with variants and aliases in DNS, there has been intense debate for more than a year now, and this draft is the attempt at actually getting down to a problem statement and agreeing upon one, as that is a necessary step to be able to actually move forward. The interesting question here, and I certainly don't have the answer, is whether this is sufficiently close to being a workable problem statement that it allows us to move on. If it does, that is fine. We don't know yet. If it doesn't, well, the message from the Working Group chairs was that, in that case, we will basically just toss this and say we are unable to agree on a problem statement in this space, and it will be tossed out the door and we will hope that some more research-orientated organisation takes it on, be that the IRTF or something else. This is presently under intense work and there are hopes for a new version within weeks, as in before Christmas, so hopefully this will converge very quickly now.
Third draft: Kitamura. His proposal regards alternatives for improving efficiency in the current environment of issuing both AAAA and A queries for addresses. In the typical behaviour, if you want an address and you issue the right call into your libc or whatever, this will turn into first a AAAA query and then an A query; you get the responses, and depending on what transports are available and what your preferences are, you choose one or the other. But they are serial, so he wanted to improve on this by trying to avoid the serial nature of these two different queries, and he had a bunch of different alternatives. Alternative A: two queries in one packet. Very cool idea, I am all in favour of this; it's sort of research and I am sure it will not work out, but it's still a very cool idea.
Alternative B: a new combined type, as in you basically query for a meta-type "address" and then you get all the stuff back. A very cool idea, I am in favour of it, and I am sure it will not work out. The third alternative is to query for AAAA and get back mapped v4 addresses; not as cool. It's not really clear here that there is a problem to be solved, and this is unfortunately the sort of recurring theme for this IETF. The problem statement is based basically on some people's, including the authors', presumption that there is a latency issue here that is unbearable and that this is really a problem that needs solving. It's not as clear to me that that is actually true, but if you go along with that, I think he has actually explored the alternatives in a nice and interesting way. That is not enough for the IETF to take action here, really, so the message back was that we had to do the traditional thing and step back and look for the problem statement first.
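The latency concern the alternatives target is the serial AAAA-then-A sequence; on the stub side, the simplest workaround is just to issue both lookups concurrently. A sketch with the two query functions injected, so it stays transport-agnostic and is not tied to any of Kitamura's specific wire-format alternatives:

```python
from concurrent.futures import ThreadPoolExecutor

def resolve_both(name, query_aaaa, query_a):
    """Issue the AAAA and A lookups in parallel instead of serially.

    query_aaaa and query_a are callables taking a name and returning a list
    of address strings. Results are returned with the v6 answers first,
    expressing a preference for IPv6 when both transports answered.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        future_v6 = pool.submit(query_aaaa, name)
        future_v4 = pool.submit(query_a, name)
        v6, v4 = future_v6.result(), future_v4.result()
    return v6 + v4
```

This sidesteps the protocol question entirely, which is part of why it was not obvious that the protocol-level alternatives solve a real problem.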
Next draft: Vixie, DNSEXT resimprove, as in resolver improvements. This is sort of a mixed bag of things. One is to automatically revalidate NS records upon expiry. Another is to try to optimise the ability of returning NXDOMAINs for stuff that hasn't been queried for explicitly, which is something that recursive servers don't do today. The third, where we have a spelling error, I am sorry, is to improve on the security of NS records by issuing extra validation queries: when you get a referral from a parent to a child, you immediately reissue the query explicitly to find out whether these really are the name servers for the child, or whether the parent possibly has a broken assumption of what the name servers for the child are. So this is something we don't do today.
This is the same thing again: no clear problem statement. I remember the IETF in Stockholm, about a year and a half ago, I think, when we discussed different alternatives for dealing with the Kaminsky attack, and out of that work and those discussions came a plethora of schemes for how to deal with it: if we just tweak the logic a bit over here and change the entire world's Internet infrastructure, everything will be much better, or if we do things backwards, things will change and we move the problem elsewhere. The outcome of that discussion was: no, we are not doing anything, which I absolutely think was the right outcome, because the DNS is robust, but it's robust because it's carefully thought out, and it's not necessarily the right thing that, at the eleventh hour, whenever we find a problem, or perhaps even don't find a problem, we improve the Internet experience for the world by just going around tweaking stuff, unless something is really, really broken; and it's not clear that it is in this case. So again, no clear problem statement.
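The referral check in the draft amounts to comparing two NS sets: the one the parent hands out in the referral and the one the child zone itself serves at its apex. A minimal sketch of that comparison, ignoring TTLs and DNSSEC status:

```python
def ns_sets_consistent(parent_referral_ns, child_apex_ns):
    """Compare a parent's referral NS set with the child's authoritative NS set.

    DNS names are case-insensitive and a trailing dot is cosmetic, so both
    are normalised before comparing. Returns the names only one side knows
    about; an empty set means the two views agree.
    """
    parent = {n.lower().rstrip(".") for n in parent_referral_ns}
    child = {n.lower().rstrip(".") for n in child_apex_ns}
    return parent ^ child  # symmetric difference
```

A non-empty result is where the draft would have the resolver prefer the child's answer over the parent's possibly stale glue, at the cost of an extra query per referral.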
And this brings us from DNSEXT over to what is called KIDNS, keys in the DNS, and the keys here are not DNSKEYs but rather other keys, or keys and certificates in general. The underlying idea is to take some crypto stuff that you are interested in distributing from you to others, put it into the DNS, sign it, and then, based on the DNSSEC signatures, applications will be able to trust the key or the certificate. This is not exactly a new concept; we have had this for years. Apart from the DNSSEC stuff, we have had the CERT record for a very long time, which is just an encoding of a certificate as a DNS record, with no interpretation. There is also a very old draft by Jacob about this, so it's certainly an idea that has been bounced around for a long time. There was apparently a BoF about this in Maastricht, I wasn't there so I just heard about it, and there was a new BoF here, which is what this is about, and there seems to be some sort of general interest and enthusiasm, so it's very likely this will become a Working Group.
It's obvious that we need DNSSEC for this to work out. However, it's not nearly enough just to have DNSSEC, because it's also very important that the end application, the client, is able to know the security status of the answer. In the plain DNSSEC case we are going from insecure to slightly less insecure, even with the last mile not being secured, and that is still an improvement. However, in this case we are going from keys and certificates that we authenticate by some other means, as in they are signed by a CA or something; and if we are going away from that model to a model based on them being signed with DNSSEC, well, unless that last mile is really, really thought out and secure and everything is in order, we are really not moving in the right direction. So this is actually not solved. That doesn't mean that it isn't a viable discussion and thing to work on, but it's not clear-cut exactly how to get all the way there.
Another thing which is very important here is that if we look at the CERT record again, it's just transport: we take the certificate, we encode it in the DNS, we send it away, and then it's picked up somewhere by a recursive server and presumably used. Whether to trust this certificate has nothing to do with DNS; DNS is not providing the authentication or the verification or the security of the record, it's just a transport infrastructure, so the authentication is out of band. Here, the intent is explicitly to change this around and instead use the DNSSEC infrastructure as the trust path, and that sort of changes everything. Not necessarily a bad idea, but also not necessarily a good idea, and in particular, not necessarily a good idea in all cases. In some cases, it could be that the name path, the delegation hierarchy, is exactly right as a trust path; in some cases, it just isn't. So there are really important issues to explore here, and I am not in any way against the proposed Working Group; I think the work is highly interesting, and as I said, we have talked about this for almost ten years, but it's not done yet, and there are lots of interesting issues to be figured out.
So I think something will happen and the mailing list set up for this is, well, in test.
That brings us to DNSOP, and there one document is the key timing document, which happens to be something I am participating in. It's been a long time coming, and it's still not really cooked, because it suffers from the problem of trying to track an evolving topic. In spite of that, we have reached a point where the discussion with the Working Group is that even though we haven't covered all the things that have been discovered as work progressed, perhaps it's more important to actually publish, soonish, in the sense of striving for timeliness, because we already have software implementers referring to the draft, and that is not good for a draft that may change or disappear. On the other hand, it could be argued that documents that come out of the IETF should be perfect, or at least thought to be perfect at the time of publication, and that is not the case here. Weighing those two things back and forth, the consensus seems to be that publication now is probably the right compromise, given that we clearly recognise that this is not the final statement on this topic and we do need to initiate further work on this document more or less immediately to try and cover the remaining bases.
The next draft: Livingood. This is interesting. Jason works for Comcast, so they have a large customer base and are able to do statistics on what percentage of their customers are broken in one way or another; broken not as in the customer being broken himself, but his Internet connectivity being broken in one way or another. And he had some numbers with low percentage figures. The problem is that even with a low percentage figure, when you have a customer base measured in many millions, that is still a significant number of entities at the end. Apparently, some of them don't have working v6, so when their system library happily queries for AAAAs and gets them back, there are situations where the application, whatever it is, will try to use this perceived v6 transport to go for the AAAA rather than the A, and things don't work out. So breakage occurs. This is clearly exactly the same problem that Lorenzo spoke about before lunch. The idea here is that because these are customers, and because the ISP, be it Comcast or someone else, is able to identify the customers, they are in a position to implement some sort of filtering based on a known inability to deal with the AAAAs that they get back. So the idea would be: if we know that this customer and that one and the one over here will break if we send them AAAAs, we will improve things by not sending them AAAAs. If you query for an A you get an A back, and for a AAAA you get a AAAA back, unless you are on the block-out list. It's an attempt by the ISP to shield the customers from stuff that they perceive will not work out. So far so good. However, my view is that this is not necessarily worth it, in spite of the customers being measured in thousands, because there is a small amount of lying here.
It just isn't the right thing, and it would be difficult in a more and more DNSSEC-aware environment to actually get this to work out if the end customer, for instance, does validation; I know that is very infrequent today, but it's something they could do. And just not being honest about the DNS information that you send back to your own customer who asks for something strikes me as not necessarily a good idea. That is the sort of experience I get in a hotel, and I hate it. It's the experience I get at an airport, and I hate it. If I got that type of experience from my ISP I would be absolutely outraged. But then perhaps that is just me.
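The filtering just described is mechanically trivial, which is part of what makes it tempting. A sketch of the per-customer suppression, illustrative only, and subject to exactly the DNSSEC objection raised: a validating client would see the missing AAAA set as tampering:

```python
def filter_answers(client_id, qtype, answers, broken_v6_clients):
    """Suppress AAAA answers for clients known to have broken v6 connectivity.

    client_id identifies the querying customer, qtype is the query type,
    answers is the answer set the resolver would normally return, and
    broken_v6_clients is the ISP's breakage list. Everything else passes
    through untouched.
    """
    if qtype == "AAAA" and client_id in broken_v6_clients:
        return []
    return answers
```

The policy lives entirely in the breakage list; the hard (and contested) part is deciding that lying to a subset of customers is acceptable at all.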
The final topic was name server control protocols, and that is something that has been kicked around for quite some time. From one point of view, the issue is very clear: there is a bunch of large-scale DNS operators and anycast operators that really think they know what they want, as in they want a way to manage all their name servers and configurations. But there are also others that, well, there are always others that have other ideas of what they want to achieve, and when you haven't figured out what everyone wants, you are stuck in this problem-statement-not-necessarily-100-percent-clear situation. And I don't think we are out of the woods; it's not really sorted yet. The first proposal here was by Mr. Cong. It's based on encoding configuration snippets, if you like, as DNS zone data, and if you encode them as DNS zone data you can transport them to another server, as in a slave server, over the standard DNS protocol, and then the remote server can do some magic to transform this DNS zone information back into DNS configuration information, do manual stitching on it, and finish it all off by kicking itself in the butt and reloading the configuration.
This has been tried before. It's horrible. And to me it's very much the case that the only reason why this is encoded in the DNS protocol is because the DNS is available. So there is a bit of laziness here: oh, we have this nice transportation infrastructure and we need to do some transportation, so let's transform this stuff, which has nothing to do with DNS data as such, so that it sort of half looks like DNS data, and then we can shove it through the same infrastructure and we don't have to invent our own. My guess is that if you take the problem seriously and you sit down with a requirements document and you design something from scratch, it's unlikely you would come up with this design. The only reason you have this is that you are trying to avoid doing the real work. We have seen it before, and personally I don't like it.
The other proposal in this area is also something that is not exactly new: Dickinson and Stephen Morris have presented this, well, many times. They have been refining it over time and, in particular, after we got the requirements document for the name server control protocol they refined it further against those requirements, which is fine. It's built on top of NETCONF, and that is both a strength and a weakness: it's a strength in the sense that you get lots of stuff for free, authentication and such, which is good; on the other hand, well, you get XML and all sorts of stuff and have to pay for it. That said, it does map to the requirements document, and it is a viable solution to this problem, I think. Of the two, this is clearly the one I prefer; they have working code and it seems to, well, it's really nice, actually.
And that sort of concludes the Working Group stuff.
Just to finish up here; this is from the charter of the MIF Working Group, just to point at stuff in other Working Groups. The MIF Working Group is chartered specifically to look at problems that may arise from devices that have multiple attachments, as in multiple interfaces that are connected in different directions, be it a laptop with both an ethernet and a wireless connection, what have you. More and more devices are like that. And guess what? It leads to problems. My guess is that this Working Group will not be able to close those problems, but it's still needed to at least identify them, and I think there will be all sorts of interesting new issues that people figure out over time as they realise that there is no such thing as the inside and the outside with a firewall in between, because in the future everything will be both outside and inside more or less at the same time, etc., etc. So the problem space is transforming itself. If we look at this not from the security point of view but rather from the DNS point of view and talk about things like split DNS, it will be a mess, so my personal statement is: don't do split DNS and you are fine, but I realise that is not the answer for everyone. And here are a few other things that seem to be doing stuff with DNS in various ways. I can't say more about them because I wasn't there, but it's clear that we have this ongoing trend that putting stuff into the DNS is always the in thing to do when you don't have anything else to do.
Steve Crocker pushed for his DNSSEC history wiki, which I think is a good idea. There is a lot of information and detail in the heads of the people who have been playing with DNSSEC for many years and were part of this process, and it's not written down anywhere. If we are able to capture at least a fraction of it in some way, I think that will be good. It's not clear to me that he will succeed, but I certainly applaud the attempt. So go there and make a contribution.
And that concludes this presentation, with the announcement that the mailing list that we all know and love is no longer there, and we just have to adapt to that. Yes.
JOAO: Let's see how I say this. I don't like this presentation, because it was a report and yet there was a lot of personal opinion.
SPEAKER: I agree.
JOAO: It's one view. I got particularly worked up when you covered the topics of DNS server configuration.
SPEAKER: OK. I have no intention of beating you up.
JOAO: You'd win. You criticise ?? in line configuration over DNS.
JOAO: You don't say otherwise it is making use of something that is there.
SPEAKER: To some extent, yes.
JOAO: You support the alternative one, and that is actually something that was done because it's already there. It's using that because it was already there, and people thought using an IETF protocol would make it easier to deal with the AST, which is a crap explanation.
SPEAKER: I agree with that.
JOAO: There is a bigger problem: what you have said to me hints that you haven't read the stuff. You pointed out the problem with that alternative being the cost of XML; that is not the cost --
SPEAKER: That is the cost I am talking about, the complexity.
JOAO: Just the language is an RFC that is 250 pages long.
JOAO: It's unparsable by a human being, OK? There are no readily available implementations, and the complexity of what is written down there is such that, if a solution like that is to be chosen, we are going to be suffering for that choice for many, many years to come, in terms of actually getting an implementation that gets things right, because the level of complexity involved is so extreme that none of the people in this room have the expertise -- not even the vendors in this room who are producing the name servers -- to probably do the thing, because it departs so much from the DNS that getting these things right is going to be nearly impossible. So if you don't like the first, at least don't support the second; look for a third alternative.
SPEAKER: I agree with that, that is fine. I just want to clarify that my objection to the first one, as in this one --
CHAIR: We are not really discussing that -- I really want to get on; it is a personal report and that is what it is, and we are running out of time discussing it.
JOAO: Mr. Chairman, I agree with you. But that had to be put here.
CHAIR: There are other people waiting and I want to stop this. I might have private things to say about this as well, but I won't. And I really want to get on; unless there are some other questions, we continue.
Suzanne: An entirely different topic -- Suzanne Woolf, ISC. On the identical resolution draft in DNSEXT, I want to add to what Joao said. As the principal editor I can say there will be a draft. In fact, the question that was posed to me when the agenda was in process was: are you ready to get up and ask for last call? No, it needs some work. It comes down to basic questions of what people mean when they ask for DNS to treat things as the same, and it turns out that that is the part that needs refinement, that needs to be clarified before we have a clear problem statement. The chairs have been absolutely adamant that there will be no work on BNAME or changing CNAME or any of those things until there is a problem statement, so anybody who is interested in that set of problems should go review the draft and help get it to the point where it's ready for last call and provides a basis for deciding among possible new technology or existing technology, because it will say either there is no problem here, or there is a set of problems here we can't solve, or there is a set of issues here we can solve and here is how we go forward.
Andrei: Well, I just had a phone call, and we should be chartered in like two weeks, but before that we will change the charter to note that securing the last mile is not work for the Working Group, though we should be aware of it, and we will be sending the updated charter to the list very soon. And there are some steps we are taking, or will be taking, to manage the amount of e-mails on the mailing list, so I hope in the near future it should be more manageable to read the list and participate.
Peter Koch: A procedural remark, or request, rather than a comment on the presentation, and it is directed towards the audience. I appreciate the somewhat personal style of the report in this case, because I think it was a change. As a chair, I observed that we have some discussion on the topics here and, although we can't really reiterate, or shouldn't reiterate or increase the depth of, the discussions we have at the IETF, I or we as chairs would like to have feedback from you on whether or not reports in this style or another are helpful to you and what we could improve or do differently. So, to that extent, I'd like to thank Joao --
SPEAKER: For making the experiment.
AUDIENCE SPEAKER: Exactly. You can grab Jim, Jaap or myself any time throughout the week and give us feedback on this or other topics. Thank you.
CHAIR: Thank you.
CHAIR: Next up is Dave.
Dave Knight: Good afternoon. I work in the DNS group at ICANN. This is DNSSEC in the root zone, signed TLDs and the ARPA zones. I see the title has been shortened in the agenda -- a better title, as a lot of people were thinking: oh, my God, not another root signing presentation.
So I will talk about the project to sign the root, and the work that has gone on since then.
Also, Jim asked me to point out that the hotel is turning up the air conditioning in the room.
The project to sign the root was a cooperation between ICANN and Verisign, with support from the United States Department of Commerce NTIA. The roles in the project: ICANN manages the key signing key and accepts DS records from TLD operators in the same way it accepts delegation changes today, and it now does this today. It verifies and processes those requests and sends them to the DoC to be authorised and to Verisign to be implemented in the root zone. The DoC checks that ICANN is implementing the verification processes and procedures. Verisign's role is to manage the root zone and the key used to sign the zone. It incorporates changes submitted by the IANA into the zone and publishes that to the root server operators.
The project began -- or was announced formally -- in August 2009. The plan received its first airing when Joe and Matt from Verisign presented it at the RIPE meeting in Lisbon last year. In December we began more of the outreach effort and launched a website around the project, and the first signed root zone was produced internally at Verisign. That was then published over the next few months, incrementally, to the root servers in the form of a deliberately unvalidatable root zone. And then in June we had our first key ceremony at our facility in Virginia. At that ceremony, the people who had volunteered to take part in this from the community were initialised into the process, the facility was turned up, and we created the initial key signing key for the root zone. We also processed the first KSR from Verisign -- that is the format in which Verisign sends the zone signing keys to be signed by the key signing key. And at that time as well, the first DS records submitted by TLD operators were added to the root zone.
Here is a picture from that first ceremony. This is all of the ICANN staff, volunteers and guests who were there that day.
In July we had our second ceremony at our other facility in California. During that ceremony, the site was turned up and the second KSR was processed. After that had happened, we could then go ahead and remove the deliberately unvalidatable part from the signed root zone, and the root at that point was signed, fully validatable and published to the root servers. Also, thereafter, the trust anchor was published by ICANN.
In November, just I guess two weeks ago, we had our third ceremony in Virginia; this is now business as usual, and we processed the KSR for the first quarter of next year. These are the people at ICANN and Verisign, and the consultants, who were involved in the project. And yes, the root is now signed and it's now part of standard operations at the IANA.
In February we will have a ceremony in California, and then in May there will be another one in Culpeper; the dates for these are now set.
TLD operators have been able to submit DS records for inclusion in the root zone since June. Instructions on how to do that, and how it differs from normal delegation changes, are on the IANA website. As of today we see 64 TLDs are now signed and, of those, 49 have DS RRsets in the root zone. We have a web page with some statistics on that.
And having finished the root signing, I will talk about what we are doing in other areas. ARPA has been signed by Verisign since March. We are currently going through the administrative procedures to get a DS RRset; we expect that to happen shortly. ARPA's children are also signed. e164.arpa has been signed by the RIPE NCC since 2007. The other ARPA children, with the exception of in-addr.arpa, have been signed by ICANN since April -- that is ip6.arpa, URN, URI and a couple of others -- and the DSs for those have been in ARPA since September. As was mentioned, in-addr.arpa and ip6.arpa will move to be served on a new set of servers operated by ICANN and the RIRs. Currently, the content of in-addr.arpa is managed by the IANA, but the zone edit function is carried out by ARIN and the zone is served on the root servers. In 2011, this edit function will move in-house to ICANN and, after that is completed, we will sign the zone. We will also move authority service away from the root servers to this new set of servers operated by ICANN and the RIRs.
ip6.arpa is already edited at ICANN and it's served on a set of servers operated by most of the RIRs. It also will move onto a new set of servers in the ip6-servers.arpa zone. Any questions, please?
CHAIR: OK. Well, thank you for this overview.
Next up, now, for some real operational experience, we turn to Jaromir Talir, to tell you how you do a key signing key roll-over from SHA-1 to SHA-512, skipping all the 511 steps in between.
JAROMIR TALIR: So thank you. Actually the correct pronunciation is -- one of those weird Czech characters.
Following the root getting signed, we decided to switch from NSEC to NSEC3, and my presentation will be about our experience with this process.
Actually, we started to provide DNSSEC services two years ago, and at that time NSEC3 was not available, not even specified, so NSEC was the only choice, and I think that we successfully convinced people that there is no real security problem with NSEC. But maybe the situation changed a little bit over the last two years, because we found out that some parties tried to use the zone walking feature together with Whois as two cumulative resources, and our stakeholders, or maybe the well-behaving registrars, convinced us to do something about that, and the immediate choice was to switch to NSEC3. So it was actually a political decision; there was no technical problem with NSEC. And quite soon we found out that the only way for us to switch was to make a roll-over of our KSK. We took that as an opportunity, when the root was signed, to get our DNS operators to switch validation -- not using our KSK, but changing the configuration to validate using the root KSK -- and that was also an opportunity to promote DNSSEC a little bit more. And maybe the last motivation was technical: the SHA-1 algorithm is going to be obsoleted by the next recommendation, I don't know if this year or next year, so we decided to use the best algorithm currently available, SHA-512. We also decided to use NSEC3 without opt-out: we have quite a lot of secure domains, so the size problem is not a big issue for us, and we thought it would not be good to lower the level of security in the domain, because NSEC3 with opt-out doesn't protect unsecured domains, which NSEC does. That was the reason why we decided to use NSEC3 without opt-out. Regarding the other NSEC3 parameters that are necessary to specify, we just looked at what is right now in the root zone for TLDs that already have NSEC3 and chose the parameters that you see on the slide.
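The NSEC3 parameters being chosen here (salt, iteration count) are inputs to the hashed owner name defined in RFC 5155: the owner name in DNS wire format is hashed with SHA-1 together with the salt, then re-hashed the configured number of extra times. A minimal sketch of that computation -- my own illustration, not the .cz signer:

```python
import base64
import hashlib

# NSEC3 owner names are Base32-encoded with the "extended hex" alphabet.
_B32HEX = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                        "0123456789ABCDEFGHIJKLMNOPQRSTUV")

def nsec3_hash(name, salt_hex, iterations):
    """Hashed owner name per RFC 5155, Section 5 (hash algorithm 1 = SHA-1)."""
    # DNS wire format: length-prefixed, lowercased labels, zero byte for the root.
    wire = b"".join(bytes([len(label)]) + label.lower().encode("ascii")
                    for label in name.rstrip(".").split(".") if label) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):  # "iterations" = extra hash applications
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode("ascii").translate(_B32HEX).lower()
```

Each extra iteration costs one more hash computation per signing and per validation, which is why operators with constrained resolvers push for low iteration counts, as comes up in the Q&A later.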
We also had to upgrade all our primary and secondary name servers to recent versions to support this quite new algorithm, to upgrade the signer to have more memory, for a reason I will speak about later, and of course also to upgrade our own resolvers to support it -- we run validating resolvers internally. We started a sort of marketing campaign towards DNS operators, just to get them to upgrade resolvers and change the configuration to validate through the root KSK. For more than a month, maybe two months, we tried to inform them on the mailing list, on our company blog, and in discussions in technical magazines, and we also created a sort of test bed with an NSEC3 copy of our current zone files, to be able to test what the situation would be after we all switch to NSEC3.
So, as I said, NSEC3 is not compatible with our previous algorithm, RSA/SHA-1, and it's not simple to roll over the KSK to a different algorithm, so this was inevitable in our situation.
But here we are not talking about the classical KSK roll-over as specified in all the documents; this is an algorithm roll-over, where we have to change to a new algorithm, and the classical pre-publish method cannot be used here. Pre-publication of the DNSKEY alone will make the zone bogus. So maybe these bullet lines are the most important information in my presentation today: if you plan to do a roll-over, keep in mind that an algorithm roll-over is a slightly different situation. In reality, different resolvers handle the situation differently: for example, Bind does not treat it as a bogus zone, while Unbound does, which is RFC compliant; Bind in this situation is not fully compliant.
What is the actual problem? There is Section 2.2 in RFC 4035, which I am not going to read here -- there is a lot of "for each... for each one of..." in it, and we actually had to read this section 100 times to understand what is going on there. It boils down to those three lines at the bottom: you actually need to sign each RRset with the new DNSKEY algorithm, so you have to have a double-signed zone in this process before you publish the new DNSKEY and, of course, at the end you have to publish the DS records upstream.
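The requirement being described -- that every algorithm appearing in the apex DNSKEY RRset must have signatures over every RRset in the zone (RFC 4035, Section 2.2) -- is what forces the double-sign-first ordering. A toy model of that check; my own sketch, not CZ.NIC's tooling, and the step names and algorithm numbers are only illustrative:

```python
# Each step: (label, algorithms with signatures present, algorithms in DNSKEY RRset).
OLD, NEW = 5, 10  # e.g. RSA/SHA-1 -> RSA/SHA-512; numbers for illustration only

ROLLOVER = [
    ("double-sign",    {OLD, NEW}, {OLD}),       # sign with both, old keys published
    ("publish-keys",   {OLD, NEW}, {OLD, NEW}),  # now the new DNSKEYs may appear
    ("swap-ds",        {OLD, NEW}, {OLD, NEW}),  # parent replaces the DS records
    ("remove-old-key", {OLD, NEW}, {NEW}),       # withdraw the old DNSKEY first...
    ("drop-old-sigs",  {NEW},      {NEW}),       # ...only then drop its signatures
]

def rfc4035_ok(steps):
    """Every DNSKEY algorithm must be covered by signatures at every step."""
    return all(keys <= sigs for _label, sigs, keys in steps)
```

Running this check against the ordinary pre-publish order -- new DNSKEY published before the zone is double-signed -- fails it, which is exactly why that method cannot be used for an algorithm roll-over.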
This is the actual timeline of our procedure. We waited about three weeks after the root got signed to let things settle down a little, and at the beginning of July we made two changes to our zone file: in the first change we double-signed our zone file with the old and new key pairs, published the zone file, and waited out the TTL of our signatures; after some time we inserted the new KSK and ZSK and waited the TTL again; and at the end of the day we sent a change request to IANA to replace the old DS records with the new ones.
This was actually processed in two days; we didn't expect it to be so fast, so that is good.
We had planned more time -- we expected it would take a week or more -- so we decided to do the second part of this process three weeks later. On the 24th of July we did the reverse operation: in one step we removed the old KSK from the file, and in a second step we signed the zone file with only the new KSK/ZSK pair. And in the end we switched from NSEC to NSEC3; that was simple, no problem at all.
What is the conclusion? All of this process is manual; the current tools, OpenDNSSEC and ZKT, which we use right now, don't support this form of roll-over, and this is maybe a question, because with the speed of IT development and with the usual KSK period of like two, three, four years, I would expect that almost every KSK roll-over will be an algorithm roll-over, so that is a suggestion for these tools to support this feature.
We found out that testing is crucial, same as Anand said: you have to test different vendors, different versions of the same software, and all the steps separately, not just the end result. During this process we also found a small bug in Bind: between, I think, the fifth and sixth step, when the old key was removed, Bind didn't return the AD bit. We reported it back and it was, I guess, fixed, and should be released in a new version of Bind. So at least we helped a little bit in this way.
That is all from me. Do you have any questions about what we have done?
CHAIR: Thank you, Jaromir. I actually do have a small question for you, but maybe...
AUDIENCE SPEAKER: Sam Weiler. Your assessment -- well, first, thank you for acknowledging that you got the process wrong. I will apologise for writing those algorithm rules in so incoherent a fashion, and I will once again disagree with your assessment of which resolver got it wrong. I think Bind is perfectly RFC compliant; I think Unbound was unnecessarily pedantic. And I have said that before, in response to Andrei's presentation at the org meeting. I am wondering why you are still perpetuating this version of it -- whether you think I am wrong or you just didn't get the memo.
JAROMIR TALIR: Andrei didn't tell me that you had this complaint, so probably there was some crossing of wires.
AUDIENCE SPEAKER: To be clear, those rules that you quoted are rules for zone operators -- that would be you -- and you got it wrong. Bind is doing the right thing in that regard. Unbound didn't need to be as pedantic as it was. It uncovered the bug in your operations that Bind didn't uncover, and that was useful to you, but it didn't need to do that.
JAROMIR TALIR: Yes, OK.
AUDIENCE SPEAKER: I was wondering, for the purposes of your NSEC3, how did you come up with 10 iterations? Because we have DNS servers that require us to go with a low number of iterations.
JAROMIR TALIR: We just looked at the current number of iterations of the TLDs that already have NSEC3 and are of a similar size; we didn't really do any deeper testing or research on what exactly is best for us. So that is the answer.
AUDIENCE SPEAKER: I note that I think you wrote 68 bits instead of 64 bits for the --
JAROMIR TALIR: Maybe I miscounted, so.
AUDIENCE SPEAKER: We have all the infrastructure in our own hands, so we don't have any DNS operators posing restrictions on us. There is no reason why we can't do what we want.
Steve Kent, BBN Technologies. I was a little surprised to see you moved to SHA-512 -- you are still using 2048-bit keys, right?
JAROMIR TALIR: Yes
AUDIENCE SPEAKER: If you look for guidance on what size hash function you should use, that is really way overboard, and by the time you'd ramp up to an RSA key size that would be appropriate for it, in terms of balancing, you will probably want to shift to elliptic curve signatures instead. So I just thought it was an odd combination. What motivated it?
JAROMIR TALIR: It could be, yes. Sorry, maybe I didn't get your last sentence -- was it about how long we will use the key, or?
AUDIENCE SPEAKER: No, it's just the length of the key itself. I mean, there are tables that suggest, if you are using an RSA key of so many bits, what is an appropriate hash function out of the current family of SHA hash functions, and for 2048-bit RSA keys SHA-256 is viewed as an appropriate size. To go to 512, skipping the 384 that is in the middle, would suggest a plan to use a very big RSA key in the future, and by the time you get to a key that big you would want to change the algorithm fundamentally, not just ramp up the key size, because it would be so slow. In the process, I found it an odd combination; I was wondering what motivated it.
AUDIENCE SPEAKER: Let me just answer that. The thing is that the KSK roll-over is so painful that we don't want to do it again in the near future, so this gives us more space. My assumption is that choosing a higher version of the SHA algorithm doesn't impose significant computing complexity and doesn't increase the size of the zone much, so we just chose SHA-512 in case we want to go higher in the future, so that we will not have to do the algorithm roll-over again, which was painful.
AUDIENCE SPEAKER: OK.
Peter Koch: Just to add to that, Steve: if you suggest that the key should be really, really absurdly long, we need to remember that the maximum length of RSA keys currently is 4K, so your remark would suggest that standardising SHA-512 in combination with RSA wasn't really helpful to start with.
Steve: No, I don't think I said that. What I would say is that before you go to a 4K RSA key you probably should have gone to an elliptic curve algorithm, because that is just ridiculously large. "We are going to change it, I would like to change it once, I am willing to buy the additional computational effort involved in doing SHA-512 because that is the biggest number that was there" -- if somebody had SHA-1024 we would probably have done that too -- that is not a good way to make these decisions.
AUDIENCE SPEAKER: Thanks for disclosing the number. I understand that 4K keys would roughly match SHA-512 in terms of pre-image resistance and so on -- or strength, say. We do see 4K keys in the wild, though, and we can change key length without an algorithm roll-over, so from an operational perspective, probably not from a crypto perspective, that is probably making sense.
Steve: When you say you can change key lengths to go to 4K, that is assuming nobody is using hardware that doesn't do it. That is a software mentality that says: I can use bigger numbers. Somebody might be doing this in a high-assurance fashion using a piece of crypto hardware that might not do a 4K key, so I think there is an underlying assumption here which I don't believe is uniformly true: that it won't break anything. But if I change the hash algorithm from SHA-256 to SHA-512, that is a big deal. I don't think that is a uniform assumption.
AUDIENCE SPEAKER: From an operational point of view it's a big deal to change the size of the SHA algorithm, because you need to do an algorithm roll-over, and that means that at some point in time you will have double signatures on everything in your zone.
Steve: I understand that. I am saying there is an implicit assumption that increasing the key size of a fixed algorithm is never as hard as changing the algorithm, and I think that is wrong in certain cases, and yet it seems to be an underlying assumption, right?
AUDIENCE SPEAKER: In which cases.
CHAIR: Maybe --
Steve: When you are trying to use an RSA key size that the chip in the hardware doesn't support. That is when.
CHAIR: Maybe we should take this discussion further off line. There is one question I wanted to ask, and that is: given the document RFC 4641 on how to deal with DNSSEC in practice -- other people have remarked that algorithm roll-over is actually not in there, and maybe it will be added or there will be a separate document -- it would be nice to have your experience in that.
JAROMIR TALIR: I think it's there; we already submitted some comments and there were some new versions of it, and it includes our operational experience, I guess.
Andrei: If you are speaking about Olaf's document, then it's there. The new version, after our experience, is there.
CHAIR: Thank you for this operational stuff. The next speaker will be Vasily -- I forget his last name -- who comes to tell us how he actually implemented GOST, which is a form of elliptic curve cryptography, for doing DNSSEC, meant to be used in Russia and that type of region. The floor is Vasily's.
VASILY DOLMATOV: Hello, everybody. The topic of my presentation is our experience of integrating GOST crypto algorithms into DNSSEC. Considering the content of the previous talks, I will skip over the first few slides, because all of us know by now that the root is signed and deployed, that some TLDs are signed, and that some registrars are DNSSEC aware now -- although when we tried to find a registrar who could put DS records into a TLD, we had some trouble finding one.
So, the reasons for GOST appearing: cryptography is a very sensitive field, and where cryptography appears, some shadows and some darkness emerge. A lot of people try to impress on you how it should be done, and so on. A lot of governments monitor and control cryptography, its usage and its implementations. Some of them do that implicitly; some of them are quite explicit about it. For instance, in Russia there is a set of specific laws and rules on how cryptography should be used and which kinds should be used.
In Russia, only certified implementations may be used for public services or personal data handling. There is a set of Russian-developed cryptographic algorithms called GOST, which is an acronym for the all-Russian national standard. These algorithms were quite unknown to the community, and the first step in implementing GOST cryptography was to make them public and known. For that purpose, RFCs were created and deployed, and now everyone can get a full description of the algorithms and analyses of them; some of them have been analysed for 20 years now.
The next step was to put the GOST crypto into DNSSEC. For that purpose RFC 5933 was created and deployed; the DNSKEY algorithm code 12 was assigned, and DS digest algorithm code 3 was assigned for GOST, so now a full set is ready for usage in the DNSSEC system.
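The code points just mentioned sit in the IANA DNSSEC registries alongside the other algorithm numbers that come up in this session; a small lookup table, with values as I understand them to be registered at the time of this meeting:

```python
# DNSKEY/RRSIG algorithm numbers referenced in this session (IANA registry).
DNSKEY_ALGORITHMS = {
    5:  "RSA/SHA-1",
    7:  "RSASHA1-NSEC3-SHA1",
    8:  "RSA/SHA-256",
    10: "RSA/SHA-512",
    12: "ECC-GOST (RFC 5933)",
}

# DS record digest type numbers (IANA registry).
DS_DIGEST_TYPES = {
    1: "SHA-1",
    2: "SHA-256",
    3: "GOST R 34.11-94 (RFC 5933)",
}
```

These are the numbers visible in the signing-chain demo that follows, where a chain passes from algorithm 8 through 12 to 5.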
You know that there should be not only algorithms and standards but running code as well. There is a fully featured GOST implementation, which was implemented by Cryptocom and has been in the OpenSSL mainstream for a year or so, and support for the GOST algorithms is enabled in the latest version of Unbound; in Bind the corresponding part is not in the latest version yet. We tried to show that these algorithms could be integrated into DNSSEC without any additional operational burden for the users or for DNS operators. We showed it on the example of signing and cross-signing several zones and checking the chain from the root. We took a TLD -- unfortunately, the root was not signed yet at the time. We managed to find, from the few registrars we have access to, only one who could transfer our DS record to the TLD, and we took the only DNSSEC-aware DNS provider in Russia. So, combining these three components, we got the following picture:
We signed a zone with the GOST algorithms, transferred the corresponding DS record to the org zone, and checked the chain, with the algorithm changing along the way. You can see in bold the algorithm numbers: they change along the way from 8 to 12, and the last record is signed with GOST and it checks out OK. Another thing we tried to check is the GOST algorithms in the middle of a chain, so one zone was signed with GOST and its child zone was signed with another algorithm. You can see, again starting from algorithm number 8 and passing through algorithm number 12, you get algorithm number 5 as the final point, and the cross chain again has no issues.
So, it is now possible to use the GOST algorithms -- and who can be interested in them?
Well, for Russia it's obvious: in some implementations it must be used, because the law enforces the use of certified cryptography only. But at the same time there is an open implementation, and these algorithms can be used freely throughout the world; there are no restrictions. And I think it's quite good: it is a bit of elliptic curve cryptography, which was not present in DNSSEC before the GOST algorithms. As for the operational side of the software, I said already that Unbound supports it out of the box; for Bind, I hear it will work out of the box starting in 2011. Any other usage of GOST cryptography is possible with the current version, and the nice OpenDNSSEC package, which supports key preparation and key management operations, also plans to implement GOST support in 2011.
So for Russia, all the main building blocks are provided now. There is also a certified version of the software which can be used, and now the Russian TLDs -- three of them -- could be ready for DNSSEC deployment.
Finally, I would like to thank all the people who helped us to create and promote this, some with advice, some with assistance. Quite a bunch of these people are present here and I would like to thank them, because without their help I could not have managed. Thanks.
CHAIR: Any questions for Vasily? No. In that case, thank you very much.
CHAIR: The floor now goes to Joao, looking at various DNS stuff and, I believe, a demo -- ways of getting the database running again. It seems it does.
JOAO: So this is a short talk on passive DNS and DNSDB, and hopefully I will be able to show you a demo of what it does at the very end.
Some of you may already have heard about this, particularly if you attended the meeting in Denver. Just a quick overview: what is passive DNS, and then what are SIE and dnsqr, which are kind of the building elements that lead to DNSDB.
Passive DNS was an idea invented by Florian Weimer in 2004: you capture DNS messages between servers as they go over the wire -- you basically intercept the traffic on the wire. The important thing here is that it's between servers, meaning that it's not between the user and its ISP's resolver; it's between the ISP's resolver and the TLD servers. The importance of this is that you are not capturing user data, so you don't have any of the privacy issues. Also, you capture only new queries and not the repeated queries: if you have thousands of people asking for google.com, you don't have 1,000 copies of that; you only have the first one, and another when it expires.
The data is captured wherever you have access to a network to capture it, so it's meant to be a distributed service, and in the case of SIE it's then forwarded for analysis. Eventually, the processed data is stored in a database.
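The "only new queries, not repeated queries" behaviour described above amounts to a dedup cache keyed on the record and expiring with its TTL. A rough sketch of such a sensor-side filter -- my own illustration, not the actual SIE sensor code:

```python
import time

class PassiveDNSDedup:
    """Forward a (name, type, rdata) observation only when it is 'new',
    i.e. not already seen within its TTL."""

    def __init__(self):
        self._seen = {}  # (qname, qtype, rdata) -> expiry timestamp

    def observe(self, qname, qtype, rdata, ttl, now=None):
        now = time.time() if now is None else now
        key = (qname.lower(), qtype, rdata)  # DNS names compare case-insensitively
        expiry = self._seen.get(key)
        if expiry is not None and now < expiry:
            return False  # repeat within TTL: suppressed, not forwarded
        self._seen[key] = now + ttl
        return True       # first sighting (or TTL expired): forward for analysis
```

This is also why the processed SIE channels carry so much less traffic than the raw one: a thousand lookups of the same popular name collapse to a single forwarded record per TTL window.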
So these are three implementations of passive DNS that are available right now: dnslogger -- I can't pronounce that one -- DNSParse, and the SIE project, which started at ISC in 2007.
In the case of SIE, which is the one I know better: it's basically an infrastructure, a shared network that is meant to exchange data on security topics, in particular DNS, and one type of data that is exchanged is passive DNS. So sensor operators around the world are running these sensors, and the data gets replayed onto a switch. Think of SIE as an Internet exchange: it has a big switch in the middle and people plug in their machines to exchange data, so when one of the machines replays data, it replays it onto a VLAN on the switch. Anyone who is on that VLAN can listen to it and do whatever they want. The idea is that there are different channels, in the sense of TV channels: depending on what you want to see, you participate in a channel, a channel being a VLAN on that switch. That allows you to select what type of data you are going to see. The messages that flow at this exchange point are in the NMSG format, a simple way of encoding data on the wire.
So one of these channels, one of these TV channels, is the passive DNS channel, and there are several of those. There is the raw one, where you get the whole data, and it's a lot of data. And then there are processed versions of the data that you can also listen to if you don't want to get the whole thing. Some figures there: the traffic is around 100 to 150 megabits per second right now on that switch, and I have put up the other two VLANs so you can see the reduction you can achieve by doing some of the analysis -- you don't see everything; you see only part of it. Where it comes to software, there is dnsqr, a module that is made available for everyone to use, dedicated to passive DNS capture, making it easier to implement your own if you so choose.
It allows for filtering, so that you can define your own filters and only listen to the traffic that you care about, and it captures and processes the output into the NMSG format so that it can be replayed. One thing it does as well is IP-level reassembly. If you have packets that are fragmented, it's a bit of a nightmare to try to track the little bits that are not the first one and match them up so that you can put the big packet back together. The library does it for you. If you want to install that particular instance of the software instead of putting your own together, for Linux there is the sensor there, for FreeBSD there are scripts, and everything is available at that URL; the presentation is on the RIPE site.
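The reassembly nightmare described above can be illustrated with a small sketch: fragments of one datagram share a key (source, destination, IP ID), each carries an offset, and only the last fragment has the more-fragments flag cleared. This is a simplified model using byte offsets, not the actual library code (real IP offsets count 8-byte units):

```python
def reassemble(fragments):
    """Group IP-style fragments by (src, dst, ip_id) and, once a
    datagram is complete, glue its payloads back together in order.
    Returns a dict mapping each completed key to its full payload."""
    groups = {}
    for frag in fragments:
        key = (frag["src"], frag["dst"], frag["ip_id"])
        groups.setdefault(key, []).append(frag)

    completed = {}
    for key, frags in groups.items():
        frags.sort(key=lambda f: f["offset"])
        # Only complete if the final fragment (more_fragments=False)
        # has arrived; then check the offsets line up with no gaps.
        if frags[-1]["more_fragments"]:
            continue
        expected_offset, payload = 0, b""
        for f in frags:
            if f["offset"] != expected_offset:
                break  # gap: a middle fragment is still missing
            payload += f["payload"]
            expected_offset += len(f["payload"])
        else:
            completed[key] = payload
    return completed

# Fragments arriving out of order, as the talk describes:
frags = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "ip_id": 7,
     "offset": 8, "more_fragments": False, "payload": b"world"},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "ip_id": 7,
     "offset": 0, "more_fragments": True, "payload": b"hello + "},
]
```

Calling `reassemble(frags)` stitches the two pieces back into one payload under the shared key, which is the bookkeeping the capture library spares you from writing.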
You have all of this, and when you put it together what comes out is DNSDB, the DNS database. It is what it says: a database to store records. It currently holds data from passive DNS and, further along in its life, also zone files. As infrastructure, its advantage is that, given you are writing data in a sequential way, this database does that particular type of work very fast. And it's a very simple database; it just has key-value maps, which is quite OK for DNS, in fact.
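The key-value layout lends itself to a simple sketch: each observed RRset maps to a record carrying first-seen and last-seen timestamps plus a count, which is the kind of summary the web interface shows later in the demo. The field names here are my own, not the actual schema:

```python
def aggregate(observations):
    """Fold a stream of (timestamp, name, rtype, rdata) observations
    into a key-value map with time_first / time_last / count per record,
    mimicking the sequential-write, simple-map design described."""
    db = {}
    for ts, name, rtype, rdata in observations:
        key = (name, rtype, rdata)
        rec = db.setdefault(key, {"time_first": ts, "time_last": ts, "count": 0})
        rec["time_first"] = min(rec["time_first"], ts)
        rec["time_last"] = max(rec["time_last"], ts)
        rec["count"] += 1
    return db

# Toy observations echoing the ripe61.ripe.net CNAME example later on:
obs = [
    (100, "ripe61.ripe.net.", "CNAME", "rosie.ripe.net."),
    (200, "ripe61.ripe.net.", "CNAME", "rosie.ripe.net."),
    (300, "ripe61.ripe.net.", "CNAME", "meeting-host.ripe.net."),
]
db = aggregate(obs)
```

Repeated sightings of the same RRset only widen the time window and bump the count, so storage stays small no matter how often a record is seen.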
You access this thing using a RESTful HTTP interface, so something that most people are familiar with. It uses JSON, usually through a web interface like the one I am about to show you. And now there is a commercial break while I switch laptops.
So, in order to use it you need a login. Because this is in beta, you have to write to the developer to get access to the beta. If you don't know him or don't have his e-mail, I am happy to be a proxy, basically. I logged in earlier; what does having access to this afford you?
The interface, as you see, is very simple. You have to choose whether you want the results to be an RRset or rdata, and what type of records you are looking for; if the type is not there you can type it in. You choose which domain name you want to see, and which bailiwick, because you can find information about a given domain in several zones going down from the parent to the child, so you can also specify the bailiwick. I want to ask what the root knows about the root regarding DNSKEYs. You see the little thing spinning down on the left while it does its work; this is not going so fast because of the network. What you see is what we have been told about all this time: the DURZ key, all versions of it, and you see information about when each one was first seen and when it was last seen. So you see the different variations of it. And then eventually you come to July 16th at 00:54 and you don't see the DURZ key any more; you see the real key.
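A RESTful JSON interface like the one being demonstrated can be driven with a few lines of standard-library code. The URL layout, hostname, and JSON field names below are illustrative guesses, not the actual beta API:

```python
import json
from urllib.parse import quote

BASE = "https://dnsdb.example.net"  # hypothetical endpoint, not the real one

def rrset_url(name, rtype=None, bailiwick=None):
    """Build a lookup URL: results for <name>, optionally narrowed to a
    record type and to the bailiwick (zone) the answer was seen in."""
    path = "/lookup/rrset/name/" + quote(name)
    if rtype:
        path += "/" + rtype
    if bailiwick:
        path += "/" + quote(bailiwick)
    return BASE + path

# What the root knows about the root's DNSKEYs, as in the demo:
url = rrset_url(".", "DNSKEY", ".")

# A canned response body in the assumed JSON shape, with first-seen
# and last-seen timestamps as seen in the interface:
body = '{"rrname": ".", "rrtype": "DNSKEY", "time_first": 1277337600, "time_last": 1279241640}'
record = json.loads(body)
```

The bailiwick component is what distinguishes "what the root says about the root" from what a child zone says about the same name, which is the choice the web form exposes.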
I have some queries now. This is one thing that I would like to mention about the interface, because it got me the first time: every query you do during the session is kept on the screen, so perhaps if you do one query and then you do another and you wonder why the screen is not changing, it's because the new one is at the bottom. So you have two options: you can either close that query from the screen or fold it, so to speak. So for instance, historical data: ripe61.ripe.net. How does RIPE manage this stuff? Well, apparently, because they have to move from Amsterdam to wherever the meeting is, they do the mapping using CNAMEs for the actual machines. So you can see that before and up to November 14th, the Sunday, ripe61.ripe.net maps to a machine that is based in Amsterdam, ROSIE. Once the meeting starts, which was at noon on the Monday, then it becomes mapped to a machine at the RIPE meeting. So you can find out what is going on, and so on.
Query results can be a bit big. I am not showing this one to show off, just to show you that some particular sets of data change frequently and you end up with lots of data. But even retrieving 9,700 records took less than one second, so it's a pretty good database. And that is what this thing affords you. I don't know if anyone wants to try out a query; just let me know. Otherwise we move on, because that is what I wanted to show you today. If the .cz people want to have a look at the keys... I don't know why access to the machine is so slow today. So, beginning in June: the reason you see dates beginning on June 24th is that that is when the system came up. There is no data from before it existed, the kind of thing that happens. And somewhere along there, probably Andrei will be able to help me, is the key that becomes longer, because just by inspection it's hard to tell in the short time. Anyway, that is what it gives you; if you want to try it out, let me know and we can give you a login. It's interesting for debugging purposes: what happened if you have a customer that says, "between 2 a.m. and 3 a.m., when you were fast asleep, I couldn't access this place because your DNS was not working"? Well, you can find out. You can find out exactly what was going on. And you can do historical research like I did with the root. That was it. Thank you very much.
CHAIR: Thank you, Joao. Are there any questions?
AUDIENCE SPEAKER: Yes. Klaus. I wonder where you get your data from, where you have the probes for the passive DNS. And a second question: when the beta test phase ends, will it be open to the public or will it be a commercial service?
JOAO: I don't remember where the probes are, actually. Keith, do you know?
AUDIENCE SPEAKER: How do you get so much data? Are you giving appliances to Service Providers to get the data?
JOAO: There is a bit of everything.
AUDIENCE SPEAKER: How do you get the data?
JOAO: We do capture on our own servers; we have sensors inside ISPs. The vast majority right now are in the US for practical reasons: you have to upload this data, and it's much easier there; you cannot ask people to give you the data and then have them end up paying for the transport as well. So we are planning to extend our network to Amsterdam sometime next year, and it will be easier for you to ship data from here if you so decide. There are some TLDs that are also sharing their data. So it's a mix of everything. As for opening it up, I don't think it has been decided yet, to be honest. I think, knowing Paul, that at the least he will ask you to become a member of the SIE project, which can range from signing the papers to giving contributions so it can be developed as well. Whether they decide to open it, whether Paul decides to open it even more, is something I don't know, and predicting his mind is kind of an impossible task.
AUDIENCE SPEAKER: From AFNIC. Is there a way you can tell from the records where they have been recorded? From the RRset.
JOAO: A way to what?
AUDIENCE SPEAKER: The set you showed us, can you find out where it was recorded?
JOAO: No. At least not in the web interface. I am not sure if they store the origin of the data in the database itself. Not all of the data is stored; for instance, one thing I have asked for that is not being stored right now is the TTL of the records. So maybe the actual probe that sent in the data is not actually stored. I have to check.
AUDIENCE SPEAKER: How much space do you need, disk space and things like that?
JOAO: It's not much. The overhead is very small. The way it maps things is very, very lightweight, so the percentage overhead is very small.
AUDIENCE SPEAKER: Thank you.
SPEAKER: University of Vienna. We have developed something similar, so some questions arose: how long do you keep the data, and is there an influence of where you place the sensor? For example, does it make a difference if I place my sensor in different kinds of communities, for example in an academic community or a commercial community? Or is there a difference between the geographic locations of these sensors? It would be interesting.
JOAO: The first question, how long we keep it: from seeing the invoices flying by, I think it's going to be forever, because there are truckloads of terabytes coming in. As for seeing different patterns of usage in different community types, I guess I haven't had a chance to look, to map it; it could be interesting research. I am going to bet that a lot of the things that happen are common, everyone uses Google for instance, but there will be more specialised queries. I don't know; I haven't looked at that data.
AUDIENCE SPEAKER: A question from IRC, from a university in the Basque Country in Spain, who wants to know about third-party applications or organisations using the service: do you produce any kind of visualisation or report based on the data?
JOAO: We don't produce any kind right now; we are concentrating on providing the service and letting people use it for whatever their purposes are. As examples of people using this data: anti-virus companies are mining this data. In tracking, for instance, one of the channels I mentioned that is available is fast flux: there are machines processing the data and applying algorithms to detect the use of domains which are usually used for controlling botnets. So people who are in the security market do find this data very useful. I mean, the sort of queries I performed here are more entertaining than anything, really, and they can have operational value if you want to see why something happened, but for systematic searching you need more detailed analysis than just a human typing things, and that is what, for instance, the fast-flux channel gives you. The typical people that use this data are anti-virus companies, botnet researchers, that kind of thing.
AUDIENCE SPEAKER: Can you do reverse lookups? I give you an IP address and you check which domains resolved to this IP address.
JOAO: I don't know if you mean whether the database performs reverse lookups; no, it just listens to what other people ask. It's observing network traffic; it doesn't initiate any queries.
AUDIENCE SPEAKER: Yes, but does your web interface allow me to query for an IP address?
JOAO: That is a good question. I think I know this one.
JIM REID: What about 0.1.1...
JOAO: I use this one for testing.
AUDIENCE SPEAKER: I don't want to know the PTR records, but a typical example: if you do research and find out that a certain domain is used for phishing, then you find out the IP address of the web server for the website, and you want to know if there are other domains used for phishing which resolve to the same server.
JOAO: OK, that is not done by this database, because this is just storage. There are people out there that are listening to the traffic and performing that kind of query, yes. But that is a different thing from the database. The database just stores traffic; it doesn't do anything with the traffic afterwards. If you want to do something with it, you subscribe to the database, I mean you get access to the database, and then you use the data for whatever purposes. You can do that, and as for looking up any given address (it's domain names, really), I don't know if it has support for that. I would suppose yes. Let's close this one. Not quite now. You can perform that later.
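The reverse question in this exchange, given an IP address, which names have been seen resolving to it, can in principle be answered from the same stored records by building an inverted index over the rdata. A minimal sketch under assumed field names, not the database's actual query mechanism:

```python
from collections import defaultdict

def invert(records):
    """Index stored A-records by their rdata so that an IP address
    maps back to every domain name observed pointing at it."""
    by_ip = defaultdict(set)
    for rec in records:
        if rec["rtype"] == "A":
            by_ip[rec["rdata"]].add(rec["rrname"])
    return by_ip

# Hypothetical stored records, echoing the phishing example above
# (names and addresses are documentation/example values):
records = [
    {"rrname": "phish-one.example.", "rtype": "A", "rdata": "192.0.2.10"},
    {"rrname": "phish-two.example.", "rtype": "A", "rdata": "192.0.2.10"},
    {"rrname": "unrelated.example.", "rtype": "A", "rdata": "198.51.100.7"},
]
index = invert(records)
```

Looking up `index["192.0.2.10"]` then yields every name observed resolving to that server, which is exactly the pivot the questioner wants for phishing research, without ever sending a live query.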
CHAIR: OK. Thank you very much for this. I guess we will move on; we are already out of time, as usual. And apologies for the last part of the session; it should have ended some ten minutes ago. So we will see you Thursday morning.
CONCLUSION OF DNS WORKING GROUP SESSION.
LIVE CAPTIONING BY AOIFE DOWNES RPR
DOYLE COURT REPORTERS LTD, DUBLIN IRELAND.