The EIX Working Group session continued as follows:

CHAIR: Welcome back, everybody. Are we all back? Kamran, are you all set?

KAMRAN KHALID: You know what this is all about, so let me just skip through the history part and jump to where we are today.

So we have got two vendor-specific, separate LANs, present in ten different locations around London. We have platform resiliency and ring protection mechanisms, MRP.

I have already mentioned that we have got two separate LANs, which provides redundancy. However, we have noticed as time has gone on, ever since LINX started, that not all members connect to both LANs, so there is some disparity in the traffic on the two LANs. That is particularly the case with members that aren't really present in London and have to run long-haul circuits into London, so we have had to rethink the way things are going.

We are going through an architecture review. There are a number of goals we have for this. One of them is obviously to have more flexible link utilisation through shortest path routing and, where possible, to avoid idle links on the network. We also want to improve the failure protection mechanisms, because one of the problems we face is that if there is any problem with the link protection mechanisms on one of the LANs, instead of being localised to the faulty hardware it can affect the entire LAN, which obviously causes a lot of problems for our members.

We wish to improve the scaling of the network, because with the current ring architecture, adding any new sites adds a high level of complexity and causes many issues. We also want a more flexible approach to resolving bandwidth issues and to providing new services in the future.

We have looked at a number of different alternatives. We looked at TRILL, but it's still quite new and there aren't really many working implementations; shortest path bridging, but there is limited vendor support; or we could stay with MRP, but that's not really a feasible option.

The choice we have gone for is VPLS, for the fast failover, the route protection and, the main thing, the traffic engineering possibilities.

The plan is to have a single redundant platform with two VPLS instances running over it and, where possible, we will be using our current dark fibre and DWDM infrastructure.

Right now we are going through lab testing and seeing where things are going. Once we have the full design and specification of the new architecture we will present it to the LINX membership for their approval. The plan is to have the actual VPLS running over the spring and summer of 2011, and then, once we have everybody migrated over, we will be ready for the London 2012 Olympics.

So here in Rome we have John Suitor, Jennifer Atterton and myself. So that is it. Any questions?

CHAIR: Thank you Kamran.

(Applause.)

We still have prizes left if anybody has a question? No. OK. Thank you. And next up is Akio.

AKIO SUGENO: I am Akio Sugeno from Telehouse. I would like to give you a little bit of an update on NYIIX. The current peak traffic is 125 gig and the traffic continues to grow; there is some sort of error in March, a huge spike, but that is simply an error, sorry. Currently we have 132 members and we are probably going to reach 140 active members; contractually we have already passed 140.

Now, currently, we are upgrading our platform to Brocade MLX or MLX-E, because we need more 10 gig ports and it can also support 100 gig, and we are preparing for VPLS, just like the other exchange points are doing.

The good thing about the MLX-E is that we can have eight ports in a half slot, which is a great architecture. And we always try to replace the platform every three years.

Currently, we have four locations: A, Broadway in Manhattan; B, 60 Hudson Street; C, 111 8th Avenue; and D, 7 Teleport Drive, Staten Island.

We are going to expand to a new location in Manhattan, which is 85 Tenth Avenue. The dark square on the right-hand side is the new building; 85 Tenth Avenue is roughly one block away, roughly one or two minutes' walk. You see the new location; it is roughly at Tenth Avenue and Fifteenth Street. I have tried to put up pictures; it is a multi-tenant building, a 10-storey building with high power and high security. I cannot mention the tenants' names, but they are very high profile organisations, including a lot of the US government, and Telehouse is also a new tenant on the seventh floor.

And we are going to have a new NYIIX POP there; this should be available in January next year. The platform is, again, Brocade MLX-E-32, and we can support 100 gig if you can afford it. That is it. Thank you very much.

(Applause)

CHAIR: Thank you, Akio. Three minutes forty; he is in line for the first T-shirt. Any questions for Akio? No. OK, next up is NIX.cz on their new topology.

PETR JIRAN: So, good morning, ladies and gentlemen. My name is Petr Jiran, I am from the Prague Internet Exchange, NIX.cz, and I would like to present our latest news, which is the topology migration. But first I would like to start with what NIX.cz is now. We are located in Prague, we have four locations, and we are now running a virtualised dual-star L2 infrastructure. We have seven Cisco 6500 access switches and two Nexus core switches, and of course we use a passive DWDM system. Nowadays we have 82 members, 21 customers, 6 TLDs and 164 connected ports, which is about 650 gigabits of connected capacity. We have peak traffic of about 155 gig; IPv6 is not doing so well, only 351 Mbps at peak.

OK. So back to our migration. Our old topology was a ring topology: we ran seven Cisco 6500 access switches, connected by multiple 10 GE links through DWDM, LR or SR optics. Of course we used spanning tree for a loop-free topology, and you can see that this topology is not very efficient, because if, for example, a customer wanted to communicate from location 1 to location 4, his traffic had to go through location number 2. So our motivation for the migration was that the core links used 42 percent of all ports, so we had only 48 percent of the ports for our customers. We had problems connecting new locations or switches; it was a little bit more difficult. We used spanning tree, so the blocked links weren't used, but you still have to spend money on those links because of failover and so on. And we had a problem with the port channel hash distribution algorithm, because if you use Cisco, you need to build a port channel with two, four or eight members.
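
As a side note on that last point, here is a minimal sketch (not NIX.cz's actual configuration) of why a Cisco-style port-channel hash only balances evenly over a power-of-two number of member links: the classic implementation maps flows into eight fixed hash buckets and then assigns buckets to members.

```python
# Illustrative sketch (not NIX.cz's actual hashing): a Cisco-style
# EtherChannel picks a member link from 8 fixed hash buckets. With 2, 4
# or 8 members each link gets the same number of buckets; with 3 members
# the split is 3:3:2, so one link carries about a third less traffic.
from collections import Counter

def bucket_distribution(n_members: int, n_buckets: int = 8) -> Counter:
    """Count how many of the fixed hash buckets land on each member link."""
    return Counter(bucket % n_members for bucket in range(n_buckets))

for members in (2, 3, 4, 8):
    dist = bucket_distribution(members)
    shares = [f"link{link}: {count}/8" for link, count in sorted(dist.items())]
    print(f"{members} members -> " + ", ".join(shares))
```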

So what were our needs and solutions:

So, we wanted to build a new, intelligent network with out-of-band management. We wanted to build an active and virtualised topology with the best performance. We wanted to have backward compatibility. We wanted to implement it with a minimum of traffic outage. And we wanted better expansion possibilities for the future. So we decided to implement new Nexus core switches and we moved to a virtualised dual-star topology.

So here is the new topology. You can see our seven access switches and two new Nexus core switches, and I would now like to explain what a virtualised dual-star topology is.

If we start with a single-star topology with one core switch, when that one core switch fails everything is broken. So you move to a dual-star topology, where you have an active and a standby core switch, and if you use virtualisation you can have both core switches active. From the point of view of the access switches, the port channel looks like one link: we have 40 gig to one core switch and 40 gig to the second one, and together that is one port channel of 80 gig.
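
A toy model of the comparison just described, using the 40 gig per-core-uplink figure from the talk (the names and the equal-uplink assumption are mine, not NIX.cz's design documents):

```python
# Minimal model of the three core designs discussed above, assuming equal
# uplinks of 40 Gbit/s from each access switch to each core switch.
def usable_uplink(design: str, cores_alive: int, per_core_gbps: int = 40) -> int:
    """Usable uplink capacity (Gbit/s) seen by one access switch."""
    if cores_alive == 0:
        return 0
    if design == "single-star":          # one core switch: all or nothing
        return per_core_gbps
    if design == "dual-star-standby":    # active/standby: only one core forwards
        return per_core_gbps
    if design == "virtual-dual-star":    # MLAG/vPC: both cores in one port channel
        return per_core_gbps * cores_alive
    raise ValueError(design)

print(usable_uplink("virtual-dual-star", cores_alive=2))  # 80 in normal operation
print(usable_uplink("virtual-dual-star", cores_alive=1))  # 40 after one core fails
print(usable_uplink("dual-star-standby", cores_alive=2))  # 40 even when both are up
```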

But where is the spanning tree? It's still there, because we like it: it's green technology, there is a tree inside. But in the virtualised, loop-free topology the spanning tree has nothing to do; it is just waiting in case vPC fails, or in case you want to connect other switches to the access switches.

So, our upgrade procedure took about half a year of preparation, so thank you to AMS-IX and LINX for the information exchange and staff exchange. We ran a core switch tender with Arista, Brocade, Cisco, Extreme, HP and Juniper, and we did lab testing with Brocade, Cisco, Force10 and HP. We needed housing optimisation, because the Nexus is huge. The whole reconfiguration and reconnection took about three nights, and the traffic outage was only for the STP topology change, so no BGP session went down during the whole upgrade. We had two hardware problems, with an X2 and one Nexus line card, so you can see no hardware is perfect. So again, from here I would like to thank everyone for the great cooperation.

Our future plans: of course, our membership is growing, so we need to readdress our peering segment, the IPv4 segment, from a /24 to a /22. We need to build up our network, we want to develop services, we want to test new IPv4 and IPv6 features, and of course we want new hardware, like 40 gig and 100 gig interfaces, and new features. So thank you very much. Questions?

CHAIR: Thank you.

(Applause).

AUDIENCE SPEAKER: Nick Hilliard from INEX. Given that you have a virtualised core, why do you still use spanning tree? It looks to me like you don't need to use it in this situation. And given that...

PETR JIRAN: You don't need spanning tree in the virtualised core, but if you want to connect a switch to two access switches you still need spanning tree, because these access switches don't support virtual port channel.

NICK HILLIARD: OK. Is this when you are connecting a customer switch to your two access switches, or what?

PETR JIRAN: For example, our technology services or TLDs, so they are not connected directly to the 4500.

NICK HILLIARD: OK. It's purely just for edge loop detection?

PETR JIRAN: Yes, yes.

NICK HILLIARD: Thank you.

CHAIR: Any other questions? No, thank you again. Next up is Milton from PTT.

MILTON KAORU KASHIWAKURA: Good morning, my name is Milton Kaoru Kashiwakura. I am a project director at NIC.br, which is a Brazilian private not-for-profit organisation. I represent the Brazilian Internet exchange points.

PTT is the acronym for traffic exchange point in Portuguese; it is the name of our project. PTT Metro provides the necessary infrastructure for direct interconnection between ASes operating in a metropolitan area.

PTT Metro has 14 Internet exchange points spread around the country. Brazil is a large country: we have 190 million inhabitants in an area comparable to about 80% of Europe. These two red arrows point to the two new cities where we have started Internet exchange points in Brazil. Here is the PTT Metro of São Paulo, the biggest infrastructure in Brazil. Its infrastructure has 20 distributed sites connected by optical fibre; in some cases we use DWDM to connect them. The total Internet traffic exchanged at the PTT Metro of São Paulo is around 33 gigabits per second, with 140 participants.

This is a table that shows the number of participants per location. Here you can see that São Paulo has the majority of the participants in Brazil, because the city concentrates the majority of the international traffic to the United States.

This other table shows the traffic per location: São Paulo is 33 gigabits per second and the next one has only 2.26 gigabits per second, less than 10 percent of that. This is the kind of disparity we have. This is the aggregate traffic since 2007; now the aggregate traffic in Brazil is around 40 gigabits per second. This is the same data but shown as exchanged traffic per month: we have 6,000 terabits per month.

Here we see the main benefits of the PTT Metro: quality improvement, reduced cost, lower latency, resiliency, and it helps to organise the infrastructure in Brazil.

So, PTT Metro: number of localities, 14; number of participants, 200 ASes; total aggregate traffic in Brazil, 40 gigabits per second, or 6,000 terabits per month. And ptt.br is the website. Thank you very much.

(Applause)

CHAIR: Any questions? No. Thank you again. Next up is Laurent.

LAURENT GYDE: Hi everybody. I want to bring you some news from SFINX, which is a French Internet exchange point in Paris. The outline is: first, some information about the interconnection between SFINX and France IX, and then information about the new equipment that we are currently bringing into SFINX.

SFINX is a kind of historical IXP, operated by Renater, which is a French network. We have two POPs in Paris, in the Interxion 1 and Telehouse 2 data centres. The infrastructure between those two POPs is made of 10 GE circuits running on a DWDM infrastructure. We have about 110 peers, and we also host DNS anycast replicas and such things.

So, about the interconnection between SFINX and France IX. France IX is a new exchange point in Paris, which was set up during 2010, and they already have, I think, more than 40 members. We have agreed to make an interconnection between the two IXPs. The interconnection is made of two 10 GE circuits, and the goal is to make it possible for SFINX and France IX members to peer with members of the other IXP; for SFINX it brings more than 45 percent more peering opportunities.

On the bottom you see SFINX and on top France IX, and we are co-located in two data centres, Telehouse and Interxion. As you see, currently we propagate the France IX VLAN into SFINX, and of course the second step will be to propagate SFINX into France IX, so that it allows members from one side to peer with members of the other side.

About the new equipment. Just briefly, the topology of SFINX: we are, once again, in two POPs, and in fact we have two optical fibre connections from one POP to the other. On those two fibres we run DWDM, we have optical circuits, and on that we build two 10 gig circuits, so that the two POPs are well interconnected.

Currently we have Cisco equipment, as you see. And CNF and DWDM.

So, the timeline. We ran a call for tender in summer 2010, and the choice, as you see, was Brocade equipment. The deployment is currently running and we think it will be ready for late 2010 or early 2011. And just a brief summary of the capacity upgrade this will bring: as you see, we are very interested in having more 10 gig interfaces, and we are also potentially interested in 100 gig interfaces, at first not for connecting members but for interconnecting the POPs.

Thank you. (Applause)

CHAIR: Thank you. Any questions? We have a question.
You only get one prize, though.

AUDIENCE SPEAKER: I have two questions for you. Concerning the interconnection between SFINX and France IX: currently SFINX members can reach France IX members through the France IX VLAN, but you cannot do it the opposite way; people connected to France IX cannot get to SFINX. So is that something scheduled or not?

My second question: you didn't talk about it, but I know SFINX was planning to have some optical exchange in Marseille. Is that still going on or is it a dead project?

LAURENT GYDE: It is planned that SFINX will be propagated into the France IX infrastructure; anyway, at this time members of each side can set up peering, but it is planned that both VLANs are run on both infrastructures. I think that answers it. And then about MORAN, which is an optical exchange point that we plan to build in Marseille: it is still ongoing, it is still a project that we want to see happen. The fact is, the goal today was to talk about what is new, what has happened, so MORAN is still just a project. I hope at the next meeting we will be able to tell you more about it. At this time we are still going down that road.

AUDIENCE SPEAKER: Thank you.

CHAIR: Any more questions?

MARTIN LEVY: Martin Levy, Hurricane Electric. Do you need to sign a SFINX contract if you connect through France IX?

LAURENT GYDE: You don't. The goal was to make it very simple. The contracts for both IXPs are built so that members can connect to the others. So you don't have to sign another contract; you sign only the contract with the IXP you are connected to. And the second thing is that we don't charge specifically for people who use this interconnection; it is provided as a new service for everyone who wants it.

MARTIN: So, as a corollary: if you connect directly to SFINX, do you have to sign your contract?

LAURENT GYDE: Yes.

CHAIR: Any further questions? No. Right. Next up is Sergi from Ukraine.

SERGI: I am from the Ukrainian Internet Exchange, the Internet Association of Ukraine. We celebrate our 10th anniversary this year. Have you ever heard about UA-IX? Could you raise your hands? Not so many, but I hope that after today everything changes.

Look at these statistics. They are not completely fresh, because I made them two days ago, but you can see Ukraine at sixth position in the top 12 list, and Italy, with MIX, at number 12. I am sure we will be number five in the next few weeks, maybe earlier, and I would like to share the secret of how to get so much traffic. The first secret is continuous development. Aggregated bandwidth grows three times per year, so nine times over two years, and we need to upgrade all our infrastructure every two and a half years, multiplying the speed of our infrastructure by 10.

Here you can see detailed statistics about the number of different port types connecting our members. The first interesting thing was in 2005, when we deployed new Catalyst 3550 switches and provided equal cost for 10, 100 and 1000 megabit connections; in less than one year half of all members moved their speed from 100 meg to 1 gig, because it was cheap and they got a faster connection. Next, we changed our vendor from Cisco to Extreme in 2007 to deploy 10 GE. The first year was not so successful, only four 10 GE customer ports, but in 2009 new, very cheap equipment became available, the Extreme X650: a small one-unit device with 24 10 G ports. You can see a lot of 10 G customers, and now we have more 10 G ports than any other type of connection. And another surprise is our price: compare, and you will see our 10 G price is cheaper than 100 megabits at any European IX. It is possible, and it is even profitable; as a non-profit organisation we have enough money to hold meetings like this, but in our country, with the same number of people attending, and we have enough money for continuous development. And our country gets a great benefit: a lot of ISPs can provide Ukrainian traffic for free. The same Extreme Networks equipment is, as you know, used at LINX, at CIXP in Geneva and in Helsinki; all prices were calculated two days ago with the Bloomberg currency calculator.

Another important thing is rules. We have unique rules, because our key rule is everything for everyone. Private peering is prohibited: if you join UA-IX you get 100 percent of all routes. We have very strong security on all levels and we do not use software route servers; we can still continue to use very old Cisco routers to do this and have uptimes of several years without any interruption.

Our membership is open to all. At the moment we are interested in getting 100 percent of all networks; half of the current members have to vote on whether they want to peer with a newcomer or not. That is our joining policy. It was the reason why RIPE did not give us IPv6 for two years, but now the situation is resolved. And we have a big benefit, because there is no need to negotiate agreements with every potential peering partner every day, and more traffic goes through our exchange because all networks are available to everyone.

More exclusive features of our Internet exchange: first, as I already said, we have one monthly fee, 180 euro. It may be 10 meg, 100 meg, 1 gig or 10 gig, maybe 100 gig one day, I don't know yet. We have free-of-charge transitions from a lower speed to a faster speed. We have used cheap SFP+ 10 GE modules starting from January 2009, and this year we completely forbade the old, expensive XFP. SFP+ has an additional advantage: if a member's equipment is located nearer than 10 metres, we can make the 10 GE connection for about 100 euro, and we have data centre space to place members' equipment at a short distance.

Right now we are looking at 100 GE, but at the moment only five vendors can produce it and the price is not very good for us, so I think we will offer this technology next year, as soon as there is normal pricing for this equipment.

And last, this picture is the topology of our Internet exchange. You can see a lot of interesting information here. We do not use any spanning tree; it is not a good technology, and since spanning tree is gone we have had no problems. We use EAPS, described in RFC 3619: in case of failure it changes path in less than 50 milliseconds, but we have had no failures. We use multi-chassis LAG, which effectively aggregates the current eight 10 GE links between different POPs, and you can see the statistics in the corner: 53 SFP+ modules, but only 16 XFP.
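
For readers unfamiliar with EAPS (RFC 3619), here is a deliberately simplified sketch of the idea referred to above; hello timers, fail timers and per-VLAN protection domains are omitted, and the class is illustrative rather than a real implementation.

```python
# Simplified sketch of the EAPS (RFC 3619) behaviour mentioned above: in a
# healthy ring the master keeps its secondary port blocked so there is no
# loop; when a ring link fails, the master unblocks the secondary port and
# flushes forwarding tables so traffic takes the other way around the ring.
class EapsMaster:
    def __init__(self):
        self.secondary_port_blocked = True   # COMPLETE state: loop prevented
        self.state = "COMPLETE"

    def ring_fault_detected(self):
        """Triggered by a link-down trap or missed health-check hellos."""
        self.secondary_port_blocked = False  # open the protection path
        self.state = "FAILED"
        self.flush_fdb()

    def ring_restored(self):
        self.secondary_port_blocked = True   # re-block before a loop can form
        self.state = "COMPLETE"
        self.flush_fdb()

    def flush_fdb(self):
        print(f"{self.state}: flush MAC tables, secondary blocked="
              f"{self.secondary_port_blocked}")

ring = EapsMaster()
ring.ring_fault_detected()   # fail over (sub-50 ms in the real protocol)
ring.ring_restored()
```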

So, I think this has been interesting for you. Do you have any questions?

CHAIR: Thank you, Sergi. Do we have any questions?

AUDIENCE SPEAKER: Yes. It's me again. Just one question: on your exchange point it is mandatory to peer with everybody else. Do you have the local incumbent, like the telecom operator, peering with everybody, or just small players and new players in Ukraine?

SERGI: Only one Internet provider in Ukraine does not connect to us, because he does not agree with our agreement. He says we should sign his agreement with him, but we have one agreement for everyone.

MIKE FROM GOOGLE: I guess a similar question: do you think you would get more members and more traffic if your rules were, perhaps, looser? I saw there are a lot of rules about having to announce every network and having to peer with every person.

SERGI: I think our rules give us more traffic, and Google is also available at our exchange via our member Topnet at the moment.

AUDIENCE SPEAKER: From Eurotransit. Just a question: I checked your website and saw that the application is only available in Russian. Are you going to change that?

SERGI: Yes, it will be translated into English and into Ukrainian, because our exchange is in Ukraine; having the language of that application be Russian is wrong.

(Applause).

AUDIENCE SPEAKER: Google. Can you please tell us whether you have done any research, like basic fingerprinting, on what sort of traffic people are sending through your exchange?

SERGI: We cannot look inside our members' traffic, but it is the same as Internet traffic in general, because we carry more than half of all national traffic, including all foreign IP transit.

AUDIENCE SPEAKER: Thank you.

CHAIR: One more.

AUDIENCE SPEAKER: I was a bit surprised that you forbid private interconnects. I mean, how do you control that and why do you do it?

SERGI: It is very easy, because most Internet providers have a looking glass, but we only check when one of the members sends us an e-mail and complains about the situation, because our main goal is to make Internet connectivity better for everyone in our country. If something goes wrong, we take action.

CHAIR: OK. No more questions? Thank you, Sergi.

(Applause)

OK, next up is our new 60-second stand up and tell the community about your exchange; a few people have slides. Could all those who want to talk stand up? We have got three; we have got... so that is five people in total. And is Frank here? Do you want to come down so we can get through this fairly quickly?

Job, do you want to start off, beginning with the alphabet? Job wants to discuss a bit more about the NaMeX interconnect, so we said he can have a couple of minutes. Can we try to keep it to 60 seconds: this is who we are and how to find us, and take it from there.

JOB: For those who can't see my badge, I am from AMS-IX. I will use this slot to talk about the 100 gig deployment that we are planning. Brocade, whose equipment we use, already has 100 gig blades ready to go, beta of course. We have now purchased 100 gig chassis and a traffic analyser/generator, so we can use that equipment in our own lab. The plan is for January to test all the Brocade blades inside our own lab, then in March we will have a pilot with a customer, so pre-production, and in May we expect to sell this as a commercial service. So by the next RIPE meeting I hope we can share some experience with it.

Pricing: roughly €11,000 per month. I am sorry we cannot meet the pricing of the Ukrainian Internet Exchange; I wish we could. That was the 30 seconds.

Now I want to talk in a little more detail about our partner programme, not specifically the one with NaMeX, which you have heard a lot about, but more in general. We have a partnership agreement under which we allow multiple customers to share a single link, a single port, on AMS-IX. The background is that we run an MPLS/VPLS network; previously we were unable to have multiple MAC addresses on a single port. Well, we could, but if they belonged to different customers and one of them created a loop or whatever, we had to close down the full port. With the new structure we can allow multiple MACs and identify them per MAC address on our switching platform. We are also already used to remote routers; we have customers connecting their router from the west coast of the US to us, so we can deal with that as well. Combining those two, we figured: there has been demand for years from people asking whether they can share an infrastructure to get to their port, because it is basically too expensive; if there are a number of customers from a specific area, they would like to share a single link. We used to say that is possible, but you need to put a switch in between; that is one of the projects that has been going on for a long time. But of course it is much nicer to allow people that are on the same infrastructure to share a link to AMS-IX, use that single port, but break out inside our network into separate VLANs. So the new programme allows any party that has an infrastructure, whether it is an exchange, a carrier or simply an ISP who runs multiple ASes on behalf of someone else, to connect to AMS-IX. A good example is NL-ix, which is already up and running, a Dutch local exchange: they have their switch next to us, so there is no infrastructure involved to get to us, but they have multiple customers on the NL-ix infrastructure that want to peer one-on-one with AMS-IX members. It also allows them to connect to the route servers that we have, so they are full members; the members of another exchange become full AMS-IX members and can peer with anyone on the exchange, as long as they are able to close bilateral agreements, just like any regular customer.

Of course there are benefits on both sides, otherwise we wouldn't do it. The benefit for us is that we see more customers from a specific region coming in on a single infrastructure, which is cost-beneficial. It brings a number of customers in one go, so that is good for the existing membership: they see more peers and more reasons to come to AMS-IX and peer with those parties that are remote. For a carrier or another exchange it is very beneficial to share the cost of the infrastructure to get to Amsterdam. I want to make it very clear: at AMS-IX we have talked to our members, and we will not extend our network outside Amsterdam; like any other customer, partners come with their own infrastructure to reach us. It is the same story here, whether it is a local party in the Netherlands that is already there, or some joint effort, in the case of NaMeX from Rome, where they buy their infrastructure together to get to Amsterdam: they share the price of getting to us and we have an interesting pricing structure for them. It is beneficial for both parties, otherwise we wouldn't do it, and it is a very controlled way of doing it, much better than what we have tried and seen in the past, where you depend on each other.

I think that more or less covers it. We already have four existing partners: one is an ISP, one is a carrier/ISP, one is another Internet exchange, and NaMeX is due to hook up by the 1st of December, we are praying. I hope that gives you a little bit more insight into why we do it and how we do it, and I am open to any questions.

CHAIR: Can I ask people to be brief.

AUDIENCE SPEAKER: James, Limelight. Does that mean the remote member signs the AMS-IX contract, or does someone sign it by proxy?

JOB: The partner signs by proxy. We charge the partner, not the end user directly. But the end user, the network, becomes a full AMS-IX member with voting rights and everything; they have their own sFlow stats and they are in My AMS-IX as well, so they have the full member benefits. It is just a joint pricing and charging model.

AUDIENCE SPEAKER: Nick Hilliard. If you are selling 100 gig at the edge, what are you doing for the core, and in particular how are you integrating your DWDM core with your 100 gig edge requirements?

Job: The 10 gig port connects to a single edge location, just like if you were a regular customer

NICK HILLIARD: But this is for 100 gig edge ports.

JOB: 10 gig

NICK HILLIARD: 100.

JOB: Sorry, you are back to the first question; I thought you were talking about the partnership model. The 100 gig is for customer connections only, not for inter-switch connections. We have eight switches in the core, and for load balancing we keep multiple 10 gigs to get to the eight cores.

NICK HILLIARD: That is not going to scale indefinitely. Are you going to have to move to 40 gig for your individual bearers, or what is the long-term plan here?

HANK: AMS-IX. We will start with 100 gig towards customers and keep 10 gigs in the core up to around 2012, when it makes more economic sense to move the core also to 100 gig. Before then, we don't expect single streams larger than 10 gig that would overload any part of a LAG.

NICK HILLIARD: Thanks very much.

CHAIR: OK, thank you, everyone. The list we have got is DE-CIX, Equinix, France IX, LONAP and Netnod. Have we missed any others that want to present?

AUDIENCE SPEAKER: Vienna.

CHAIR: So next up is DE?CIX, followed by Equinix and then France IX.

SPEAKER: My name is Frank, from DE-CIX in Frankfurt; we run the German Internet exchange. What is the latest news? We have recently started to replace our first ten Terascale boxes with Exascales; the idea is to limit the number of edge switches. We have thrown additional MLX 32s at the core and added NewTelco as a new site, which brings the list of DE-CIX-enabled sites to around 13. Around 50% of our customers are v6-enabled, and our goal is clearly to have 100 percent by the end of 2011, so watch out for more news about our v6-related activities. We have hired three more staff members to support our web developer and engineer, so that is 18 people now; five years ago, when I started, it was the three of us, so we have seen a bit of growth there. We recently celebrated our 15-year anniversary party, and we had our very first customer summit meeting with around 300 guests, so that was a very good event.

So, in 2010 we are planning on connecting around 65 new networks, and we are pretty much on track with that. We have a one-stop shop for many of our customers. We are also going to add additional services, with a deployment that should be live in Q1 next year, and we are planning on expanding our consulting business. I don't want to take any more time here; just to give you a quick idea of what it looked like 15 years ago, we were in the back room of an old post office, and the other pictures actually show some of our newer deployments. That is it from my side. If you want to talk to us, I am here with my colleagues Yvonne and Andreas; please feel free. And I am sorry, our ports are not 1,800 a month either; let's try to figure out the cost of backbone all the way from Frankfurt to Kiev, that should be fun as well.

JOHN TAYLOR: Hi, John Taylor from Equinix, and I promise to run it like a freight train, so mercifully brief. I thought I would start with a poem, right? So, an Equinix update. What could be better, a weekend in prison or inclement weather? Our news is commercial, it's quite nasty; we will deliver it quickly, before you go pasty. I have got a few minutes, one of them is gone; I better get started before the big gong.

We have had a great year integrating Switch and Data: traffic is up and to the right, and members and partner ports are up and to the right, so peering is alive and kicking inside Equinix. We have got a great new addition to staff: Vincent Rice has joined us as part of business development, and we are pleased to have him on board. Take the time to get in touch with him; he will have a lot to say about what we do with interconnection and peering inside Equinix going forward. We have been working a lot in Europe on standardising our exchange platforms, rolling out a new multilateral peering exchange so that it is consistent with our platforms in other cities, expanding in Geneva and introducing service level agreements everywhere, as in the US. We are funding a new portal development for the IX customers in 2011 (that should say 2011, not '10), and it should be ready for customer use in the second half of the year.

Looking forward, here is a quick map that you can consult about where Equinix is operating exchanges around the world, both Internet, carrier Ethernet and global roaming exchanges, with access to the AMS-IX operated GRX in Amsterdam. In Zurich we have got a new redundant route between our buildings, and we have changed our interconnection policy so that cross-connects between buildings are now priced the same as inside buildings; that will bring some relief for some people. The only thing we are not supporting is OM3-capable multi-mode fibre; we just encourage you to use single mode for 10 gig links between buildings going forward. At CIXP we will be adding a third node sometime in the first quarter of 2011; it has taken us a little while, but it should have all the capabilities of Equinix Exchange, which we think will add a lot of value to that exchange point. A quick update on Paris; I am doing three exchanges, so maybe cut me a little slack, almost done. We added our 100th member in October and are at around 110 now. We are focusing on two facilities, Telehouse and Equinix Paris 2 and 3, touching about 70% of the ASNs in Paris from those two locations, and we are very aggressively bringing that exchange online. Finally, we are also looking at inviting partners who are in our facilities to help bring customers to our exchanges, although we do operate a one link, one MAC address policy.

So, a final word about the carrier Ethernet exchange: it is coming to 19 markets around the world. We have got ten rolled out now, and we would love to talk to you about how that may be able to do things like IP over layer 2 in certain markets, which could be very, very interesting. If you want to contact us, there are four of us here today: myself, Vincent Rice, Remco and Erik.

CHAIR: Any questions? Great. Next up is France IX, Martin.

MARTIN: I am doing this presentation for Frank who couldn't be here. I am on the board and let's get right to the meat.

A new IX in France, based in Paris, spread around; an association of networks; two other POPs, with extensions to more POPs. I think this is probably the meat slide: coordination of the Paris peering scene, public and private peering, everything you would expect in an IX, all the ports that you can eat. Operational since June 2010, 74 members right now, traffic is about 16 gigabits and growing. For more information contact me afterwards. I ran some quick numbers on the costs; I don't have a cost slide here, but if you go to the website the costs are up, and it looks like the barrier to entry on a 10 gig port is about 650 megabits of traffic, which is pretty low. If you want to see me afterwards, let me know. Thank you very much.

(Applause)

CHAIR: Thank you, Martin, under 60 seconds; that is the idea of this slot. It is a quick stand-up and pitch slot, because if we have got 20 people we can't do three minutes each; that was the old format and everybody hated it. So, moving quickly on, it's LONAP.

SPEAKER: I am here at RIPE with Andy; we are an exchange in London, we have 100 members, so it is good to see so many of our members here, and we hope to see some new members from here as well.

CHAIR: You have competition. That was under 20 seconds.

SPEAKER: Actually, I will skip the slides, because 60 seconds is nothing. If you don't know us, we are Netnod, based in Sweden; you might know the fact that we are located in underground bunkers. Maybe the most interesting thing about us is the high amount of traffic per peer: 66 peers and about 250 gig max, so I won't do any comparisons. One piece of important news for us is that we lowered our prices for 2011, to 20,000 (sorry, Swedish kronor) per 10 gig port and 10,000 per 1 gig port, and that is across the board at all our locations; if you want to know exactly what that means for you, let us know. Of course that will kick in for existing members, but also for new members. OK, I am trying to be brief.

Another interesting piece of information: a route server installation, to be activated before the end of the year in Stockholm and then moved onto the other sites. It will be present on all the different VLANs, with different RIBs, and after much evaluation we have chosen to use BIRD. We have just launched our new website; we thought it might be good to have a website that works and looks decent. Another piece of information, and I am not sure I am proud of it, but we are now on Twitter. And we have a Netnod meeting coming up in March 2011, and we think it will be an interesting one.

If you are interested in talking to us I am here and Kurtis is here and two of our tallest colleagues are here, they are easy to spot because of their height. That was it from me. Thank you.

(Applause)

CHAIR: Thank you. Just under two minutes, I am afraid, but a good attempt. So, next up is Martin from VIX; last call for any other IX that wants to present. No.

MARTIN: I think I am going to beat that. I am working for the Vienna Internet Exchange. I have got no slides either, but I have this wonderful piece of information material, all about the exchange point. If you want to grab one, I am sitting in front of the rear exit; you can't miss me. If you have any questions, just ask me. Thanks.

(Applause)

CHAIR: 22 seconds, well done. Now we are going to start the second session of the meeting. So, do you want to take over? Route server RFC updates, a fascinating subject.

ELISA JASINKA: Hello, I am with Limelight Networks, for everyone who did not realise that fact yet, but I still have not stopped talking about route servers, so that is why I am here. We all know what a route server is; many Internet exchanges use them, so everyone has an idea of what they are for and what they do. The problem we ran into at some point, looking at different implementations and playing around with different installations, is that no one ever really defined what a route server is and what it is supposed to do. At some point Cisco started looking at implementing proper route server functionality in their boxes. Some people use Cisco routers as route servers, but they don't completely fulfil all the requirements that a route server should meet. So they are now implementing complete route server functionality in their boxes too, but they didn't really know how to go about it. So a whole bunch of us sat down, Nicola from AMS-IX and Robert from Cisco among others, and wrote a document on this. The problem with the implementations so far was that they were going in different directions and no one really knew what they were supposed to do. Are we supposed to provide things like policy or not? Some people, the Ukrainian IX as we heard, force everyone to peer with everyone, so no policy or filtering is needed on the route server; other people don't do that, so you actually want the possibility on a route server to still define whether you want to exchange your prefixes with everyone on it or not. There are other questions, like: is my route server supposed to use multiple RIBs or not; does it put its own AS number into the AS path or not, which is what happens if you just use a normal router as a route server right now. A normal BGP implementation would also not pass MEDs on to another BGP peer, so they are useless in that case. And there are other attributes that BGP would normally not pass on, but that a route server should pass on transparently, because this node in the middle is not actually forwarding any traffic; it is just passing on your information, connecting the members via updates and telling you where you can reach everyone.

So we submitted a draft, together with the people mentioned, defining what a route server is supposed to do, and one of the key points in there is basically how it differs from a normal BGP implementation. These are a couple of the points. It has to accept updates. It does not have to change the next hop, as it does not participate in the traffic; you get an update saying this is where you can reach that guy. It should not insert itself into the AS path, because we all know that best path selection is based on all of those values, and changing them may alter the result, the prefix that in the end is selected for you, and we don't want that. MEDs are meant for the peer to evaluate, so if the route server drops the MED and doesn't pass it on, they are useless; it should pass them on. The same goes for other BGP attributes that are meant for the actual BGP peer to process, not for a route server that doesn't do anything with them. So this is the core piece of information for the implementation part. And we continue with a discussion, because if you want to implement filtering you run into the issue of prefix hiding. I believe Nick Hilliard presented this at the last or a previous RIPE meeting, somewhere in Amsterdam, on what the issue of prefix hiding is. I am going to go over it really quickly so you have an idea.
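
As a rough illustration of those "pass it on unchanged" rules, here is a minimal sketch using hypothetical field names; it is not the draft's wording or any vendor's implementation, just a contrast between plain eBGP re-advertisement and the transparent route server behaviour described above.

```python
# Minimal sketch of the propagation rules listed above (hypothetical names):
# a route server re-announces a client's route without rewriting NEXT_HOP,
# without prepending its own ASN, and without stripping the MED.
from dataclasses import dataclass, replace

RS_ASN = 65000  # hypothetical route-server ASN

@dataclass(frozen=True)
class Route:
    prefix: str
    next_hop: str      # the announcing client's address on the peering LAN
    as_path: tuple     # left-most AS is the announcing client
    med: int | None

def normal_ebgp_readvertise(route: Route, my_ip: str) -> Route:
    # What a plain eBGP router would do: new next hop, own AS prepended,
    # MED not propagated onwards.
    return replace(route, next_hop=my_ip,
                   as_path=(RS_ASN,) + route.as_path, med=None)

def route_server_readvertise(route: Route) -> Route:
    # Transparent behaviour: attributes pass through unchanged.
    return route

r = Route("192.0.2.0/24", next_hop="198.51.100.10", as_path=(64511,), med=50)
print(normal_ebgp_readvertise(r, "198.51.100.1"))
print(route_server_readvertise(r))
```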

So you have a route server, with AS 1, 2 and 3 connected to it, and they all announce the same prefix P. The route server performs best path selection and picks P from AS 1, the blue one, as the best one, and it will go ahead and forward that one to the other ASes connected to the route server. If AS 5 decides that it does not want any prefixes at all from AS 1, it would not get the prefix at all, because P from AS 1 is the one that has been selected, and a normal BGP implementation does not say: if this one is not usable, I am going to forward the next one. One way to avoid that, and this is what most route server implementations do these days, each with its own way of implementing it, is to use multiple RIBs, where you basically perform best path selection multiple times. You have a second RIB into which, from the very beginning, you do not insert the prefixes that come in via AS 1; this would be the special RIB for AS 5. In this case it performs its separate best path selection and then forwards prefix P from AS 2 to AS 5, which apparently does not want to peer with AS 1, for whatever reason.
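
A toy model of that scenario follows; the AS numbers and the one-value tie-break are purely illustrative (they stand in for real BGP decision logic), but the ordering of filter and selection is the point being made above.

```python
# Toy model of prefix hiding: with a single shared Loc-RIB, AS5's filter
# against AS1 hides prefix P entirely; with a per-client RIB the filter is
# applied *before* best-path selection, so AS5 still gets P via AS2.
routes_for_P = [
    {"origin_as": 1, "pref": 300},   # would win the shared best-path selection
    {"origin_as": 2, "pref": 200},
    {"origin_as": 3, "pref": 100},
]
as5_filter = {1}                      # AS5 refuses anything from AS1

def best(candidates):
    return max(candidates, key=lambda r: r["pref"], default=None)

# Single shared RIB: select first, filter afterwards -> nothing left for AS5.
shared_best = best(routes_for_P)
to_as5_shared = None if shared_best["origin_as"] in as5_filter else shared_best

# Per-client RIB for AS5: filter first, then select -> AS2's route survives.
to_as5_per_client = best([r for r in routes_for_P
                          if r["origin_as"] not in as5_filter])

print("shared RIB gives AS5:", to_as5_shared)          # None (prefix hidden)
print("per-client RIB gives AS5:", to_as5_per_client)  # route via AS2
```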

This is the prefix hiding thing.

Then, continuing with the draft, we have a section on operational issues, mentioning things like scaling problems. As we just talked about, you perform many, many best path selections and have many, many RIBs, which at some point may run into scaling issues, depending on what the implementations look like. You could use special RIBs only for the people that actually use a filter, but that needs pre-computation and awareness of who is using filtering and who is not. Other route server implementations just use a separate RIB for everyone by default, which means they perform a whole bunch of best path selections.

There are other useful hints, both for route server operators and for route server users: things like prefix leaks; people have max-prefix limits or other prefix filtering on their route servers on the IX operator side. On the user side there are things like your router checking whether the update you receive actually comes from the peer that your session is set up with; on Cisco this is called enforce first AS. This is not going to work towards a route server, because the route server is not supposed to insert its own AS number, so things like that need to be disabled when connecting to a route server. As for what we did with the document: we went to the IETF, wrote this draft and said we have something about route servers. We went to the Global Routing Operations Working Group; it was submitted as an informational draft and we presented it in Maastricht, at the IETF in July this year. We received a lot of positive feedback, people were very happy and it looked like this was going somewhere, but, as I explained on one of the first slides, someone came up and said: you are changing BGP here, aren't you? And we were like: maybe. The problem is that an informational RFC in an operational working group is not supposed to change any standard, and BGP is kind of an important standard for all of us here. So we had to change a couple of things around and decided to move the document to an actual standards document. We presented it again and added a couple of pieces of information that people found interesting or missing, all from the feedback that we received after the first presentation. The new version came out a couple of weeks ago, and in Beijing last week we presented it again, in that Working Group and in the IDR Working Group; IDR is the one dealing with everything related to BGP, so since we admitted we might be changing BGP they had an interest in it too. And we are continuing the process, to end up with a document that other vendors can look at when thinking about how to implement route servers in actual hardware. If you ever thought about participating in the IETF, this would be the time to support this. And this is it. Questions?
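
To make the enforce-first-AS point concrete, here is a small sketch of the router-side sanity check and why it trips over a transparent route server; the AS numbers are made up and this is not any vendor's code.

```python
# Why "enforce first AS" style checks must be disabled towards a transparent
# route server: the check expects the left-most AS in the AS_PATH to be the
# ASN of the BGP neighbour the update came from, but a route server that
# does not prepend itself forwards paths starting with the *original*
# announcer's AS.
def enforce_first_as_ok(update_as_path: tuple, neighbour_asn: int) -> bool:
    """Model of the router-side sanity check."""
    return bool(update_as_path) and update_as_path[0] == neighbour_asn

RS_ASN, CLIENT_A = 65000, 64496

# Update originated by CLIENT_A, received from the route server (neighbour = RS_ASN)
transparent_path = (CLIENT_A,)            # RS did not prepend itself
prepending_path = (RS_ASN, CLIENT_A)      # what a normal router would send

print(enforce_first_as_ok(transparent_path, RS_ASN))  # False -> update rejected
print(enforce_first_as_ok(prepending_path, RS_ASN))   # True, but AS_PATH is longer
```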

(Applause)

CHAIR: Thank you. So, questions? T-shirts. Prizes. Ah.

AUDIENCE SPEAKER: Google. Why do you think that the removal of the AS number of the route server is a mandatory requirement?

ELISA JASINKA: I am not saying to remove the route server system at all. The AS number, OK. Well, it could alter the best path selection, right? And looking at it from a principled point of view, the route server is not participating in the traffic: your traffic will not traverse the route server's AS, it will go directly to the peer that announced the update. The AS path being the list of ASes your traffic goes through on a path, strictly speaking the route server is not supposed to be in there.
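
An illustrative bit of arithmetic for that answer (made-up AS numbers, AS_PATH length considered in isolation): a path prepended by the route server looks one hop longer and can lose a tie-break it would otherwise win.

```python
# If the route server prepends its own ASN, a route learned across the IX
# looks one AS hop longer and can lose the AS_PATH-length comparison against
# an equal-length path learned elsewhere.
def shortest_paths(paths):
    """Return the path(s) a router would keep on AS_PATH length alone."""
    shortest = min(len(p) for p in paths)
    return [p for p in paths if len(p) == shortest]

direct_peer_path  = (64496,)           # via a transparent route server
via_prepending_rs = (65000, 64496)     # same route if the RS prepended itself
via_transit       = (64499, 64496)     # competing path via a transit provider

print(shortest_paths([direct_peer_path, via_transit]))   # IX path wins outright
print(shortest_paths([via_prepending_rs, via_transit]))  # now a tie -> may lose
```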

AUDIENCE SPEAKER: I honestly disagree. Can you get back to slide number 2.

ELISA JASINKA: I hope so.

AUDIENCE SPEAKER: Here it says it must... should not... sorry, I read it wrong.

ELISA JASINKA: We thought about this.

CHAIR: I think there is a special prize for that question.

ELISA JASINKA: Can I comment on this? I know what you mean: for it to work, it does not have to do it, but from a strict point of view it would be better if it did. The wording is carefully chosen.

AUDIENCE SPEAKER: Thank you.

MARTIN: Hurricane Electric. I had to stand up and ask the exact opposite question of that, with respect. As an operator, I appreciate enormously the amount of work that is being put into the route server, both on the document and on the code side, and as an operator I never want to see the AS number of that resulting work, ever, anywhere. Having certain IXes, not in this room, that do inject their AS number, and other IXes that don't, I can say from an operational and consistency point of view that we absolutely don't ever want to see this AS number, or the AS number of a route server, ever, because it is an anomaly of policy, not of routing.

ELISA JASINKA: Precisely. And the reason they do this is because they use hardware that is not capable of not inserting it; hence writing down a document describing what those things are supposed to do. All those people using Cisco will then be able to turn on the route server feature and not do it any more.

DANIEL KARRENBERG: RIPE NCC, I want a T-shirt. I am also a little bit religious about being minimalistic in RFCs, so I wanted to go to this slide and ask for the motivation of the word "process" there.

ELISA JASINKA: An example, OK? One way to use filtering on a route server is to tag your prefixes with communities; those communities are then actually meant for the route server itself and not to be passed on to other people. In that case, if it receives a prefix with such a community, the route server actually has to process this community, do something with it, and not pass it on to other people.
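
A small sketch of that kind of community handling follows; the community values and semantics are hypothetical, not any IXP's published scheme, and they only illustrate "the route server processes it and strips it".

```python
# Hypothetical "0:peer-as" control communities are instructions *to* the
# route server, so it acts on them and strips them before re-announcing;
# ordinary communities pass through untouched.
def process_on_route_server(communities: set, target_peer_as: int):
    """Return (announce?, communities to pass on) for one candidate peer."""
    do_not_announce = (0, target_peer_as) in communities
    passed_on = {c for c in communities if c[0] != 0}   # strip control communities
    return (not do_not_announce), passed_on

tagged = {(0, 64511), (64496, 100)}    # "don't send to AS64511", plus a normal tag
print(process_on_route_server(tagged, 64511))  # (False, {(64496, 100)})
print(process_on_route_server(tagged, 64499))  # (True,  {(64496, 100)})
```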

DANIEL KARRENBERG: Yes. So, why...

ELISA JASINKA: This is in the detail; it is in there in detail. But the motivation for the word "process" is cases like that, where something is actually meant for the route server itself and it has to do something with it, because that would be processing it.

DANIEL KARRENBERG: Why don't you want that if it doesn't affect the output?

ELISA JASINKA: Why don't I want that if it doesn't...

DANIEL KARRENBERG: Yes, modify, process or remove communities...

ELISA JASINKA: May affect the output, right?

DANIEL KARRENBERG: OK. Now I get it. Thank you.

CHAIR: OK, you can have a T-shirt afterwards.

ELISA JASINKA: I mean...

CHAIR: Any other questions? Thank you very much.

(Applause)

Next up is Nick, he has got two talks in one, hopefully.

NICK HILLIARD: Good morning. The surprise is not that I have two talks in one, but that I have actually been invited up here at all, after my last performance on the podium in the European Internet Exchange Working Group last time around.

Anyway, I want to talk about our exchange architecture, which has changed rather dramatically. And I am really glad, actually, to see UA-IX: Sergi's Internet exchange is scaling the sort of systems that we are looking at to a really big size, and that is really comforting for me.

Just very briefly: until 2009 we had a dual LAN architecture, Cisco 6500s, and we had two locations, so that was four switches. We decided in 2009 to open up a couple of extra points of presence, but it became clear fairly quickly that Cisco 6500 kit was too expensive and we weren't going to be able to afford it. We were beginning to think that, well, Cisco 6500s are really good for certain things and really reliable, the hardware is really nice from a reliability point of view, but 10 gig on a 6500 is not very good, really. You have got two cards: the 6704, which is a contended card, low density, high cost, and it uses XENPAKs; and on the other hand the 6708, which is more or less twice the price and uses X2s, which was going to cause us problems, so we didn't want to go down that route. We created a wish list and had a beauty contest; it would have been nicer to have performed actual live testing before buying the kit, but unfortunately we didn't have the time. So we bought a whole bunch more switches. These are switches, in case anyone hasn't seen one before.

So that was all fine, and we sat and thought about things. We started getting rather large power bills from one of our co-location providers: 29 euro cents per kilowatt hour, and that peaked at a certain stage. When you are talking about a 6500, we worked out that they were pulling between 800 and 1,200 watts each, and that was landing us with very significant power bills. We also had a support issue, which is that the support for the machines was based on a percentage of the initial capex, which was relatively high, and between those two things our five-year total cost of ownership was turning out to be a rather large amount of money. And the third problem was that the cost of a 10 gig port was really excessive, we felt. So we got all of these figures, banged them into a spreadsheet and built a model of what would happen, purely from a financial point of view, if we were to take the Brocade TurboIrons and FESXes that we were using and replace our entire infrastructure with those. It turned out that, even if we didn't re-sell the 6500s, we would break even at five years, and if we resold the Cisco 6500 kit we would actually end up ahead after five years, so it was a fairly clear financial case. Just as an observation, it turns out that the cost per port of a Brocade TurboIron 10 gig port is about ten times... sorry, 10 percent, one-tenth, of the cost of a 6500 port. Well, that was fine. But was it actually going to work? So, this was our due diligence.
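
For a sense of scale, here is some back-of-the-envelope arithmetic using only the figures quoted above (0.29 EUR/kWh, roughly 0.8 to 1.2 kW per chassis); the support and capex terms of the real model are not given in the talk, so they are left out.

```python
# Rough yearly power cost per chassis at the quoted tariff; this is only the
# electricity component of the five-year TCO comparison described above.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.29          # EUR, from the talk

def yearly_power_cost(avg_kw: float) -> float:
    return avg_kw * HOURS_PER_YEAR * PRICE_PER_KWH

for kw in (0.8, 1.0, 1.2):
    print(f"{kw:.1f} kW chassis: ~EUR {yearly_power_cost(kw):,.0f} per year")
# -> roughly EUR 2,000-3,000 per chassis per year before support costs, which
#    is how a per-kWh tariff becomes a significant part of five-year TCO.
```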

We created a list of features which we felt were pretty important to us, and we did a fairly thorough analysis. Of course you can never catch everything when you have a beauty contest; there are always minor things here and there. Subsequent to buying the kit, the London Internet Exchange were very, very helpful: they lent us, or allowed us to use, their Ixia testing gear, and we set up a couple of tests. This is a result of one of the tests, the RFC 2544 performance testing, and it indicated that if you have 10 gig input on one side of the switch and 10 gig on the other, and you configure a snake configuration, then under lab circumstances the switch will actually operate at wire speed, which was good. Now, wire speed in a lab situation is of course not wire speed in a real-life situation, but it was a good indication to start off with.

In terms of overall feature compatibility, the FESXes don't do ethertype filtering; I understand that is a software rather than a hardware problem. The TurboIrons currently don't support sFlow version 5, only a previous version of sFlow, but there are plans to get sFlow 5 in. Neither of them supports RA Guard, but I think there are only one or two switches out there which do, so that is not going to be a huge problem.

It has been noted in several forums that the switches are fairly feature-limited, and that is OK, we don't need a huge amount of features. But the interesting thing for us was the change in switch architecture, which was a change from a store-and-forward-based design to a cut-through design. So, just very quickly going through this: in a store-and-forward system, you get your packet in, you stick it into a buffer, then you look up the packet's destination and then you send it out again. But on a cut-through switch, as soon as you receive the destination MAC address, you can start passing the packet directly through to the other end, and this has some interesting architectural consequences. The first is that you require an awful lot less buffer space. As a not very interesting comparison, the 6704 cards have 16 MB of per-port buffer space; this is considered to be not very much, and they are considered to be very under-provisioned in most situations. On the other hand, a TurboIron, and indeed not just that but pretty much all of the other 10 gig switches which are based on either the Broadcom or the Fulcrum chipsets, the Extreme X650s, the Arista 24-port 10 gig boxes, though not the Nexus 5000, which has a slightly more complicated buffering system, pretty much all of them have 2 MB shared between the 24 ports. We received a recommendation from the manufacturer not to mix port speeds on the same box.

I want to explain very briefly why:

So, you have your switch fabric here (do I have a pointer? No, I don't), and you have your ingress port and your egress port. Of course that isn't usually the way it works; usually you have a couple of ingress ports, and you might have traffic coming in from several ports and going out on the one port. So if you look at a sort of typical constrained traffic profile, this is an imaginary 1 gig port, OK? This is the traffic profile on a very busy port over a one-second period, and you can see that during those 50-millisecond time slices none of the traffic actually exceeds the 1 gig, so buffering will never happen in that sort of situation. But of course reality isn't really like that, and quite often, across a large number of ingress ports, you might actually exceed the capacity of the egress port. When that happens, the excess data is temporarily dumped into a port buffer; what happens is that it just gets spread over a longer period, some of the data will be delayed, and you will see higher latency and things like that. So it is important that you have enough buffer space to be able to take on that extra data.
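
A worked example of how far a buffer stretches under that kind of egress oversubscription, reusing the 2 MB and 16 MB figures quoted earlier; the burst rates are illustrative.

```python
# How long can a buffer absorb an oversubscribed egress port before it
# overflows and starts dropping?
def absorb_time_ms(buffer_bytes: float, ingress_gbps: float, egress_gbps: float) -> float:
    """Time until the buffer overflows (ms), for a sustained burst."""
    excess_bits_per_s = (ingress_gbps - egress_gbps) * 1e9
    return (buffer_bytes * 8) / excess_bits_per_s * 1e3

# Two 10G ports bursting into one 10G egress port:
print(absorb_time_ms(2e6, ingress_gbps=20, egress_gbps=10))    # ~1.6 ms with 2 MB
print(absorb_time_ms(16e6, ingress_gbps=20, egress_gbps=10))   # ~12.8 ms with 16 MB
# Beyond that, the excess traffic is dropped, which is why small-buffer
# cut-through boxes need the bursts to stay short.
```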

OK. So on your store and forward switch, you have your packet coming in on the ingress port; it goes into the green area, which is the port buffer, and then once it's been fully received on the ingress port you can start thinking about where it's going to go out. If you get multiple packets in for the same egress port, they all come into the same buffer and the switch will make some sort of decision about the best way to handle it. The cut-through switch is slightly different: as soon as you get the destination MAC address, you just send the packet straight through, it goes straight out. But that means that if you have got other packets coming in at the same time, which will quite often happen with bursty traffic, they are going to get buffered temporarily, so you still need buffers, just not as many.
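
The latency side of that difference is easy to see with a back-of-the-envelope calculation, shown below for an assumed 10GbE port: a store and forward switch has to receive the whole frame before it can forward it, while a cut-through switch can in principle start once the destination MAC, the first six bytes of the frame, has been read. Lookup and fabric time are ignored here.

```python
# Serialisation delay before forwarding can begin, at 10 Gbit/s.
LINE_RATE_BPS = 10_000_000_000

def serialisation_delay_ns(num_bytes):
    return num_bytes * 8 / LINE_RATE_BPS * 1e9

for frame in (64, 1518):
    sf = serialisation_delay_ns(frame)   # store and forward: wait for the whole frame
    ct = serialisation_delay_ns(6)       # cut-through: wait for the destination MAC
    print(f"{frame} B frame: store and forward waits {sf:7.1f} ns, "
          f"cut-through waits {ct:4.1f} ns")
```

For a 1518-byte frame that is roughly 1.2 microseconds versus a few nanoseconds, which is why cut-through designs can get away with far less buffering for the common case.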

Some observations on buffering. What I have said there is really the simplest overview I can give; it's not particularly applicable, in fact, to most of the modern switch architectures that are used today; in particular, the Broadcom chipset uses a micro-cell architecture which is slightly different. Store and forward switches need much, much bigger buffers, and as a result cut-through switches are usually built with smaller buffers. This does mean that in certain situations you will see more packet loss on heavily loaded egress ports. You can avoid a lot of these problems by implementing your step down from 10 gig to 1 gig on a core switch / edge switch architecture, and that is what we have done. The FESXes are store and forward switches, in the same way that, say, the 6500s would have been, but they have 36 megs of port buffer shared between 24 ports, whereas the 1 gig cards on the 6500 just have a static per-port buffer of 1.3 megs, which is not a huge amount. In this respect the FESXes are arguably quite a bit better. You can invent lab situations to show that each methodology will work better in very specific cases. We are not interested in lab set-ups here; we are interested in generic sort of Internet exchange packet chaos. We suspect, although we don't have direct proof yet because we don't have high enough traffic levels, that the performance of the 6704 will be roughly equivalent to the new TurboIrons that we have.
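
The 10 gig to 1 gig step-down is where the buffer figures bite, and the rough arithmetic below shows why doing it on a box with a bigger shared pool helps. It assumes a worst case where a burst arrives at line rate from a 10 gig side for a single 1 gig port, so the buffer fills at roughly 9 Gbit/s; shared-buffer carving is ignored, so the second figure is an upper bound.

```python
# Illustrative only: time until a buffer fills during a 10G-to-1G burst.
def time_to_fill_ms(buffer_bytes, fill_rate_bps):
    return buffer_bytes * 8 / fill_rate_bps * 1000

fill_rate = 10e9 - 1e9                     # ~9 Gbit/s of excess during the burst
print(time_to_fill_ms(1.3e6, fill_rate))   # ~1.2 ms with a 1.3 MB static per-port buffer
print(time_to_fill_ms(36e6, fill_rate))    # ~32 ms if one port could use the whole 36 MB pool
```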

So briefly, in conclusion: will it work? It will certainly work for INEX. I did note that it's not going to work for large IXPs, and I am very happy to be proven wrong by Ukraine; I think what they are doing is ground-breaking, and it's fascinating to see that this is going to scale much bigger than we had originally thought. There are certain limitations. You have to do very, very aggressive monitoring of the ports and also of the internal switch fabric, to make sure that if you have drops you know exactly where and when they are happening; and if you can monitor them closely, you should actually be able to work around them. If the drops are on your core links, you can upgrade your cores relatively easily. If they are on an edge, say an eyeball provider, an access provider, then you just kick them until they upgrade.

Will it break? That is an interesting question. We don't know if this model will break for the IXPs. It certainly won't break for INEX. We suspect that the big switch with the big buffers model will probably scale a lot further but, as I say here, we really look forward to having that sort of problem. So in general, it's critical, if you are going to operate an exchange like this, and I know there are several exchange operators in here who are looking at this model, that you understand buffering and queuing and the internal switch architecture; and the second thing you have to do is implement very, very extensive packet drop monitoring.
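
As a minimal sketch of the kind of drop monitoring being described, the loop below polls per-port discard counters and alerts on any increase. The fetch_discard_counters() function is a placeholder, not a real API; you would implement it with whatever your switches expose, for example SNMP interface discard counters, sFlow counter samples or CLI scraping.

```python
# Minimal drop-monitoring sketch: poll cumulative per-port discard counters
# and alert whenever they increase between polls.
import time

def fetch_discard_counters():
    """Placeholder: return {port_name: cumulative_discard_count}."""
    raise NotImplementedError("wire this up to SNMP/sFlow/CLI for your kit")

def watch(interval_s=30):
    previous = fetch_discard_counters()
    while True:
        time.sleep(interval_s)
        current = fetch_discard_counters()
        for port, count in current.items():
            delta = count - previous.get(port, count)
            if delta > 0:
                print(f"ALERT: {port} dropped {delta} frames in the last {interval_s}s")
        previous = current
```

In practice you would want this on every edge port and on the core links separately, so that a drop can be pinned to a member port or to the fabric, which is exactly the distinction that decides whether you upgrade a core link or chase a member.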

So there we go. That is all I have to say. Any questions?

CHAIR: Thank you, Nick.

(Applause)

I am not sure if we have got any suitable T-shirts.

SERGI: I know how to resolve all your problems. You will never see any drops, and you have got green equipment, because now all your IX equipment uses less power, and all the equipment costs less than 100,000 euro, because we use Extreme 24-port switches which cost less than 10,000 euro each. It's very cheap, it's working, even the fibre optics are cheaper, and there are no problems. We switched away from Cisco three years ago and are very happy.

NICK HILLIARD: I take your point, that in order to scale...

SERGI: I am trying to help you.

NICK HILLIARD: It's very easy and very cost-effective to build extra 10 gig ports where necessary, and because you can offer a lower price to your members, that means that when you are kicking them to upgrade they will be inclined to do it sooner rather than later. Thank you very much, that is an interesting point.

CHAIR: OK. Any other questions? Right. Thank you very much, Nick.

(Applause)

Andy is going to do a quick minute and a half on the state of the switching wish list which we haven't really looked at for a while and then that will be us.

ANDY DAVISON: I thought it was about time we had a look at our wish list technical document again, so that it is more in line with the needs of organisations who are buying switches for their Internet exchanges today. I don't think it has been substantially reviewed for about five years, and when I looked at it when LONAP did its most recent RFP, I found there appeared to be descriptions of problems that are well solved now by the switch vendors and, at the same time, some of the things that we thought were our highest priorities were not covered in the document. So I thought this might be a good meeting to start discussing it again but, maybe, instead, we will start a conversation on the mailing list, which everybody who is in the room should subscribe to, because news about the meetings is also posted there. We will start a discussion on the mailing list and maybe bring it as an agenda item for 62 in Amsterdam; lunch has been so enjoyable this week, we don't want to miss any opportunity to eat in Rome. So the message really is: subscribe to the mailing list and we will have a conversation about that before 62. Thanks.

Mike: I was the originator, the original editor of it.

CHAIR: Introduce yourself.

MIKE HUGHES: No affiliation. I was one of the original editors of the switch wish list, and I completely support people getting on and changing and updating it. It's interesting: I was looking at it at the back with Remco, and there are so many things in that list that we as IX operators now take for granted in the switches, that we didn't have then. So it proves the power of a document like that: we have used it when we have been going out doing purchasing and shown it to the vendors, and they do listen in the long term and go and do these things. The other thing is that things we were asking for that were non-standard, that some vendors were implementing and some not, are now standardised, so it needs refreshing. So yes, let's get a discussion going.

ANDY DAVISON: I believe RA guard is mentioned in the document and we still can't buy it, so it will be a good opportunity, maybe, as a community, to write to the vendors and say we have been asking for this for a while, but now we are really asking for it. I don't know if that is a strong enough message. I think part of the problem is that it's still held up in the IETF. Anyway, this is the conversation that we will have at 62.

CHAIR: Well, thank you all for coming. As Andy says, please do subscribe to the mailing list; it's how we communicate amongst ourselves as a Working Group, and relying on sending private mail to people just doesn't scale when we have got lots of presenters. When you are sending proposals, could you send them to the EIX Working Group chairs' address, so that both of us can see them and either of us can deal with them, whoever sees them first; that caused quite a few delays this time. Thank you for coming. Anybody who is due a T-shirt, they are available at the front; hopefully we will have some in your size. See you at the meeting in May; that has caused some clashes for some people. See you soon.

LIVE CAPTIONING BY AOIFE DOWNES RPR

DOYLE COURT REPORTERS LTD, DUBLIN IRELAND.

WWW.DCR.IE