>> BEC WHITE: Thank you. So, hi. Jason and I are technical folks, but we're going to take a half step out of our lane today to talk about how some of the technical and project decisions have impacted our larger organization.
So, my name is Bec White. I have been at Palantir.net for the past ten years. I'm the senior director of delivery and collaboration, but my background is in engineering.
>> JASON PARTYKA: And I am Jason Partyka. I've been with Argonne for about eight years, working with Drupal for over ten now, and I have a background in computer science and software engineering.
>> BEC WHITE: So, we've been working with Argonne for the past four years to transform the lab's public and internal web presence. And we're going to talk a little bit about these past four years today.
>> JASON PARTYKA: So, this is an aerial picture of Argonne National Laboratory. That big circular thing at the front is a big X-ray device known as the Advanced Photon Source, and that device is being heavily used right now in coronavirus research.
And Argonne National Laboratory is one of 17 national laboratories. We're located in the Chicago suburb of Lemont, Illinois, surrounded by the Waterfall Glen Forest Preserve. There are other laboratories that are surrounded by forest preserves, but those are all fictional. As far as I know, we are the only one actually surrounded by a forest preserve.
We have around 13,000 people on site on any particular day, and there are 16 research divisions. We also host scientific user facilities; the APS is one of those, as well as some of our supercomputers.
>> BEC WHITE: So, Palantir.net, we've been working with Argonne since 2015 on a couple different projects. We're a full service digital consultancy, and you can find us at Palantir.net.
>> JASON PARTYKA: Within Argonne, I work in the division known as BIS, Business Information Systems. We manage a whole host of digital services, including communications, which are our Drupal platforms, as well as other more traditional business functions such as HR, benefits, training, conference planning, electrical inspections, and document management for policies and procedures and research publications. Those are our big application areas.
But we also have a number of constraints that we need to deal with. We believe in the principle of least access: only the access necessary to do your job. We have contracting, location, and compliance requirements for all of the software that we use. We also have extremely large and long application lifetimes. A lot of this information and these services are actually managed by different departments; a lot of the time we simply customize these applications and make sure they keep running.
>> BEC WHITE: So there are some really specific constraints too, like content publishers requiring elevated permissions to publish content, or records retention rules around any content published in a broadcast medium like the website.
There's also taxonomy that's used across the organization, so it's developed by the library department, but it's used for web content as well. There's privacy stuff because of pulling information from HR systems into Drupal. And then, Jason, you mentioned the biggest one is really application lifetime. This stuff stays live for 11 or 12 years.
>> JASON PARTYKA: Or even longer. We have discovered some applications that were written about 20 to 25 years ago, some of the very first web facing applications.
>> BEC WHITE: So we wanted to talk about some of the services that specifically use Drupal, and a little bit of WordPress, though we're moving away from that. Drupal is used for internal and external stuff, primarily public websites and an intranet. The content on these sites serves multiple audiences. We mentioned that there are 13,000 staff, collaborators, and contractors, so the internal audience is a pretty significant part of communications strategy.
So, back in March of 2017, we started development of the web project. This project was planned to accomplish every desired feature you can think of: redesigning the public website, implementing single sign on, managing user accounts and permissions outside of Drupal, integrating with the publications management system, integrating with researcher profiles based on internal person data, and an internal taxonomy project.
Replacing a Drupal 6 based intranet, integrating with a ticketing system, integrating with a benefits system, syndicating news and events from the intranet to the public website, so unifying that kind of internal and external communications strategy.
So, this new technology stack, we fully expect it to be in use for the next five to ten years, which thank goodness for semantic versioning in Drupal 8.
This development was organized into sprints, but the scope was pretty much laid out at the beginning of the project. Everything was on the table. We launched the site in August 2018. This was 11 months later than the original timeline and launched with about 60% of the original scope. Just the public site, not the intranet, not the content syndication.
This site had a beautiful design. It had a lot of the integrations that we talked about: the single sign on, permissions management, publications integration, profiles integration. There's a robust testing suite, and it included a reusable platform for future work. So, what was future work then is now current work, based on this platform that was launched.
So, it was a successful launch.
However, as you saw, we had a few challenges. There was a launch cliff. So, what the heck happened to the other 40% of the planned functionality? Where did it go? When do we get it? That's all stuff that we need. It wasn't on the timeline just because we liked it.
And then we maintain this. We now have a platform that we're moving forward with.
So, the next challenge was daily complexity for content managers and end users. This isn't something that we added with that public site launch; it was just ever present. But we did launch without the intranet, and content managers were adding news and events in two separate locations for internal and external audiences.
A bunch of separate digital services are still there. There's a broad set of needs, so that set of digital services is going to stay.
And we got user feedback like, oh, congratulations on the launch, but I'm really frustrated. I'm frustrated with when it launched, I'm frustrated with the content model. I don't see what I hoped I would see there, or I'm still using the old intranet and it's really annoying.
And finally, our third challenge was the extended timeline and reduced scope. We planned a six month project and we got a 17 month project. And the original feature set didn't survive; we had to pivot from our original goals.
So, our latest project: we launched the intranet site. We started last May and finished last December, and we had planned to launch in October. So we planned a six month project and completed it in eight months. We got 95% of our original feature set, and we really focused on reducing complexity for users. At the end of it, we literally got tickets in the ticketing system that said, this is awesome. They didn't have a bug to submit; they just wanted to reach out and say they were happy.
So, that was very exciting for us.
How did we do it? What did we change? We built a well tested web platform. So we really laid the foundation for this in the first project. The first project took longer, but it included testing and reusable integrations that provided the basis for this next project.
And we also created feedback cycles and transparency that allowed us to hit the target with the features that we developed.
So, I want to hand it off to Jason to talk about how building the platform really transformed the way we work with the web at Argonne.
>> JASON PARTYKA: All right. I'm going to take over the screen here, so just give me a second. All right. So, everybody can see my screen now?
>> BEC WHITE: Yes.
>> JASON PARTYKA: All right, great. So, how building a platform transformed the web and how we develop on it at Argonne National Laboratory.
So, one of the things that was discovered and determined to be a priority early on in the project was to improve, and really implement, a CI/CD and testing capability. This was recognized by management, but we really didn't have much of a basis to start from.
And a big reason why this was accepted as a priority is the recognition that we were going to have multiple websites on one Drupal installation. So this would allow us to create the complex and highly integrated solution that we were spending a lot of time and effort and budget on.
So, a quick segue here for a definition of CI/CD, in case these terms are new to you. I'll read the first bullet point because I think it's the most interesting part: continuous integration is the practice of merging all the developers' working copies to a shared mainline several times a day. And continuous delivery is about creating short cycles and delivering on those short cycles.
So, we are now at a point where we are effectively having one release with every sprint. And this is a relatively new thing that we have done, but so far, it has been working well and we're able to create new features and roll them out in short times.
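To make that idea concrete, a minimal pipeline in a tool like GitLab CI might look like the sketch below. This is an illustration of the pattern, not Argonne's actual configuration; the stage names, commands, and the `scripts/deploy.sh` path are all hypothetical.

```yaml
# Hypothetical GitLab CI sketch: every merge to the mainline is built,
# tested, and (from the main branch only) deployed in one short cycle.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - composer install                  # assemble the Drupal code base

test:
  stage: test
  script:
    - vendor/bin/behat --profile=ci     # run the automated test suite

deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh production    # hypothetical deploy script
  only:
    - main
```

The key property is that the same build and test steps run on every merge, so a release per sprint is just the deploy stage firing one more time.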
So, something here about CI and CD: this is something that is brand new to Argonne's IT department, or BIS. I'm a big sci fi fan, so my analogy here is that we have kind of figured out how to go at light speed. Any of the tools I have listed here, CircleCI, GitLab, Travis CI, even Jenkins, will allow you to do that.
But the other thing we have communicated internally is that we're still figuring out how to work well with this amazing capability. So, I kind of say we're now figuring out how to actually go well beyond light speed.
And the other thing here is we wanted to reduce the amount of drama within the project. So I've got a lovely drawing of a llama that was made by my daughter, completely unprompted, this week because she was bored with the shutdown. Our phrase was llama llama, no drama. This was almost our motto for the project, and the CI/CD portion of it really helped minimize the amount of drama that we had in this project.
And why did we feel that this was necessary? As we mentioned earlier, our lifetime for this application is really long, and could potentially be longer now that Drupal is on a semantic versioning paradigm. We just got off CVS a couple of years ago, and it took us a long time to do that, even though people really wanted to get off of it.
Our document management system has been around for over 11 years now. The next big reason is that developers come and go, especially when bringing on other people from within BIS; not a lot of them know about Drupal. Some developers have been coding since the '80s, and some of the business requirements they're used to are much more concrete: think about things like accounting or HR work. The workflow is very prescribed, very laid out, and not as apt to change as it is in a marketing or a public relations context.
And finally, we're also in an environment that keeps moving. As we mentioned earlier, we have a lot of regulations, policies, and procedures that we need to adhere to. Our authorization and authentication requirements are policy; we didn't implement these just because we wanted to be difficult. They are things that we are mandated to do, along with retention: there are certain retention requirements that apply.
All right. So where did we start from? Well, we started from almost zero. We were running Jenkins internally in sort of a shadow IT way. By that, I mean our development team was responsible for maintaining our Jenkins instance. We had to update the jars when necessary. We had to apply the OS patches when necessary. That was a huge drain on our time and resources, and we could never really develop that solution. This was what we were using prior to the start of this project.
But that made it very, very difficult to really use. We never used it much beyond a super cron or scripting capability. We would deploy out to the server, apply some updates, but that would be it, and we were also running Drupal cron on a regular basis. We didn't have much in the way of automated tests. We had no way to know whether or not a deployment would work before actually putting it onto production.
And the other critical thing was that these build scripts were disconnected from Subversion; we kept track of them within Jenkins. That was a big pain point. We had to make sure we constantly updated them.
We really did get a big assist from Palantir. They helped us kick start this delivery process and moved it into git. We were able to leverage all the hooks and tools that git and GitLab allow you to build from. We established a build process, and we also established a real but limited testing framework. All of these were new capabilities.
So what did we start with? Pretty much the basics: does the site install? It's a very basic test.
What do we have now? As we alluded to earlier, we are talking about a public website. We also added our guest house website to this platform. So we've got three websites on one code base, and they really all have completely different business cases: a highly customized public website, a highly customized internal site, and effectively, a [indiscernible] site. Our Behat tests have grown: there are 550 scenarios, approaching 8,000 testing steps. We're also testing clean installs and production updates. So when we test production updates, we're grabbing the database from production, bringing it down into the containers, running our test scripts, and making sure we can apply the update scripts. And we're able to apply the exact same update scripts that are run on production, because it's all kept track of within git.
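For flavor, a Behat scenario in a suite like this might read as follows. This is an illustrative sketch, not one of the actual 550 scenarios; the path and step wording are hypothetical.

```gherkin
# Hypothetical Behat scenario; paths and step definitions are illustrative.
Feature: Public news listing
  In order to follow laboratory announcements
  As an anonymous visitor
  I need to see published news articles

  Scenario: Anonymous visitor can read the news listing
    Given I am not logged in
    When I go to "/news"
    Then I should see the heading "News"
    And I should not see the text "Access denied"
```

Each plain-English step maps to a reusable step definition, which is how a few hundred scenarios can expand into thousands of testing steps.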
Now, this is a really awesome part here. Our first major win had to do with a security update that happened about a year ago now, to the Metatag module. We ended up inadvertently using a bug as a feature, so when we applied the update, it broke. We were able to fix it within a couple of hours.
Now, the really, really interesting thing about why this was a huge win for us is that, as you may recall, Google was ending support for the Google Search Appliance at the end of 2018.
So because of the delays, we were then scrambling to get a replacement. We ended up using Coveo, but that's sort of beside the point here. The point is that we were using the Metatag module to provide some extra context about the type of searching that we wanted to provide. And since we ended up inadvertently using a bug as a feature, it would have broken if we had applied this update without the test.
And all of a sudden, the search engine that we just rolled out, this brand new one, and we're advertising all of these big major improvements, would stop having relevant content for people that were trying to search.
So, we caught this before it became an issue, and we were able to know exactly what the issue was, and we fixed it within a couple of hours, and we were back on our way.
So what did this buy us? It bought us a huge amount of confidence. Everybody on the team recognizes that the testing is supported. Management recognized it, and they even endorsed continuing work down this path of automating our deployments, automating our testing, and just automating as much as possible about our platform.
But we actually kind of have a new problem, which I think is a good problem and a good place to be in, and that is that the testing is hidden. Even though we're telling our end users that this is going on, it's still hidden from them. Most of the problems are solved before they even get to the end user. Sometimes the amount of testing that is happening is not visible up front, so they ask, well, how much testing is actually happening? At least now we can go back and point to this.
But, you know, we point to it and then we have to remind them again in a few months, but it's a good place to be in.
And then we're also setting an example for the rest of BIS. So, while they're not going to be able to take all the tools that we used, a lot of it will be applicable. There will be a lot for them to learn.
All right. So onto the second step of our presentation here, and we're going to talk about how the integration approach transformed the intranet experience. Bec's going to talk about this slide and I'm going to pick up and talk a little bit more about our integrations.
>> BEC WHITE: So, one of the things that we find within such a large organization is that users face a lot of complexity, just in accomplishing daily tasks. They have to gather information across a whole bunch of different systems.
By integrating some of those systems into the intranet, we were able to reduce the complexity for users. So we shifted that complexity from the user experience broadly across the organization into the technical stack.
So, in most cases, we didn't replace those separate systems. The HR system still needs to do its HR thing; the intranet is not going to do that. The intranet is not replacing the training system. But the key information that people need from those systems is integrated into the intranet, so for their daily stuff, they don't have to go out there. There are pathways into those other systems when they need to actually interact with them.
So, this is an ongoing consideration in how we prioritize these new intranet features, and overall, it's part of a larger initiative called improving how we work within Argonne.
>> JASON PARTYKA: All right. So, we have four broad buckets of integrations: authentication and authorization; personal information; some of your PTO and training information, and by training, I mean things about compliance or process issues; and network status.
So, the first one, our authorization and authentication integration, is our single sign on integration. This is just about who you are and whether you are authorized to actually get onto this platform. It is our focal point for personalization.
This is how we get all of the information to tell our sites who you are. We take the information given to us by the authorization and authentication integration and use it to retrieve a lot of your personal information from our HR system. We are actually using the migrate module to do this on a recurring basis, rather than a web service call. A big reason for that is that this data is used so heavily throughout the site; we didn't want to introduce that network dependency, and we wanted to have it cached within Drupal. This also provides metadata to our search provider through those meta tags. It also provides researcher profiles: not every user has created a profile, but if you're curious, the URL is right on our website, and a good number of researchers have used this as the basis for their profiles.
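In Drupal 8, that kind of recurring import is typically described as a migration definition. A sketch might look like the YAML below; the `argonne_hr_people` source plugin, field names, and bundle are hypothetical stand-ins for the real HR integration.

```yaml
# Hypothetical Drupal 8 migration definition for a recurring HR import.
id: hr_person_data
label: 'HR person data'
source:
  plugin: argonne_hr_people     # hypothetical custom source plugin
process:
  title: display_name
  field_email: email
  field_division: division_name
destination:
  plugin: 'entity:node'
  default_bundle: person_data
```

Running a migration like this on cron keeps the person data inside Drupal as ordinary nodes, which is what makes it cacheable and available to the rest of the site without a blocking network call.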
The next integration, and this is one of the things Bec was talking about how we are trying to pull information from a lot of the other systems and put them into My Argonne to make it be our focal point, is about PTO and training.
So, we have put the most commonly used information into one place. Once again, we're not replacing the HR system; we are just elevating the information that you need on a daily basis. The things a lot of people are asking for are: how much vacation time do I have, what is my sick leave balance, what's the training that I need to do. It's a big compliance thing: if an employee falls out of compliance with training requirements, that puts us out of compliance.
Also, a critical thing is network status. Since a lot of people are telecommuting right now, including a lot of Argonne, some of our services are actually only available within our VPN or via Citrix or whatever.
But a regular end user is not necessarily going to know that. So we're able to determine how you're actually connecting to our websites via this integration and we can help customize information to provide a better user experience.
And Bec is now going to talk about how Agile transformed the way we work.
>> BEC WHITE: And I'm going to steal the screen back from you, if that's okay.
>> JASON PARTYKA: Perfectly fine.
>> BEC WHITE: Cool. Yeah, it's funny how some of those technical integrations actually become a user experience point.
So when I'm talking about agile, I'm going to talk about small A agile. There are all sorts of places you can go for those big A Agile practices.
But what I'm talking about is using a phased approach to reduce the time to launch and to allow safely deferring features. Argonne had a change management process that was pretty cool, and getting the full team in the room, beyond the developers, meant that we could hit the target for features the first time around. When we started, this was the scope grid we got. The red is the fundamentals, the yellow is the first rollout, the green is the second rollout. But it was a lot of things across a broad set of categories, including the kinds of things where maybe we don't know how they're going to work properly, because the system we're integrating with isn't fully in place yet, or users aren't trained to enter content yet. You probably can't read this, and I don't expect you to, but the far right column is content, and that meant that a big part of each phase was content development.
In our phased approach, our December 2019 launch was rollout 1. It included features that used existing APIs. We tried to reduce systems complexity for users and improve the user experience across applications, and we also wanted to unblock content work, because content was a piece of each of those rollouts.
We postponed features that were integrating with other systems, or depended on certain types of buy in. So we were able to reuse code for single sign on, for profile integration, all from the public site launch. There were a couple internal APIs that we integrated with. But any changes to those APIs were limited to adding a few data points. So, if a whole new data source was required within the API, that feature wasn't part of rollout 1.
We did things like removing the VPN requirement for users to access the intranet, which is actually very relevant right now, because the VPN is maxed out with all the work from home.
We replaced the old news and events site entirely, and we paved the way to replace the old intranet site, trying to reduce the number of sites that people have to interact with.
And the reason the old intranet isn't deprecated is because you have to manually migrate a lot of that out of date content.
And then we have a short term roadmap for replacing additional systems. But, again, that's part of rollout 2.
We used a couple criteria to figure out when we should prioritize or defer features. One of the biggest ones is if the technology is in transition. So, another department or our department is planning to replace a system, but we haven't started yet.
So one of the critical integrations for the intranet is the document center. So, integrating with the document management system. That transition hasn't started yet. So we can't put that in rollout 1. It would definitely delay our launch.
There are some that are fuzzier than that. If developers are working on an API but we can't consume it yet, even if their deadline is within our rollout 1 timeline, we don't want to depend on that. So we use the first rollout, the first phase, to coordinate and make sure they're building the right thing and we have the information we need.
And then we adopt that API in phase two.
The other aspect of integration is people. Sometimes the people just aren't ready: they'll have skepticism or a lack of confidence in these systems, or be invested in their existing ways. By having an earlier phase or rollout, we can establish confidence and buy-in. Once people see that we can successfully launch a set of features, and they start using those features on a daily basis, they'll be ready for things like user generated content. If we had launched user generated content in the first rollout, people wouldn't have known how to participate or where to look for it. They just weren't ready.
The other thing about people can be that they need training. So, earlier phases can provide functionality to train on, and then the content comes later.
So, in rollout 1, we provided a lot of content tools, but we didn't have some of those content milestones actually in there, because that depends on getting more editors in to do the work.
So, that content actually comes days later than providing that functionality.
We break down our work into actionable pieces. It's really helpful to have workshops for each feature where we can set boundaries and talk about the full scope of the feature. Then what I do is throw a bucket of cold water on the team and say: if you could only have one part of this feature, what would it be? Jason can attest to the shock. But I need it all! I need everything!
The thing is, if we identify the smallest possible number of stories, the smallest piece, the minimum viable product, it gives us a lot of flexibility to determine our priorities. So the ideal case is that our MVP is smaller than our timeline, so that we have some room to add in the things that become important as we see that feature set evolve.
I mentioned change management. So there is actually a point person within the Argonne organization, Angela Kenyatta, who coordinated communication with end users and stakeholders. She organized town halls to communicate with them, and it really helped people understand what was coming and when it was coming, which smoothed adoption. It wasn't a surprise; it wasn't thrown over the fence at people.
So, finally, having the full team in the room when we're making decisions was incredibly helpful in having enough information. So the full team for us means the product owner, designer, the developers, and the content manager.
When we work as peers to understand the system from end to end, to understand both the technical constraints and the content needs, we can describe the feature correctly the first time. We get our feedback as early as possible. So, we don't just get feedback after launch; we get feedback after each sprint, and we get feedback even earlier, during definition.
And for me, the content manager is really a critical piece of that because they know how the site will be used. They're the most thorough, informed, and empowered users of your system. So they're going to tell you what they need to say to their audience, what type of content they need to enter. And if your site doesn't support that, now you have the chance to fix it.
So if you're asking us how to do this, which you might be curious about since you're here, we wanted to pull out a couple of tips. Adding a test suite from the beginning lets us carry it all the way through; it keeps us stable for years. Using reusable code allows us to do the same complex integrations over and over. Our profile integration: implementing that on the intranet after we had already done it for the public site was a snap.
By building and consuming internal APIs, we can integrate those systems and surface the information where it is most useful to users.
We planned work in phases, which means that future phases are there, they're on the timeline, and we can move work onto them with confidence because we know they're going to happen. But we can also adjust as the information changes: we don't have to pivot our whole project, we can shift the phases as we get new information.
And then we get feedback early and often. We make sure that our project is in front of people and that people who are using the project are having direct input into what's going on.
So, a couple of hints that you're doing it right. Your applications have a long life cycle and you have automated testing. Make sure your tests sometimes fail, because if they're not failing, you're not getting useful information out of them. They might not even be running, if they're not failing. Who knows?
If you write a custom module to integrate with an internal system, hopefully you're using it more than once. It can make sense to write custom code to integrate with an internal system even if you use it only once. But if you're using it more than once, you're really leveraging that investment.
When your project has extensive dependencies on other software and other teams, you have to be able to move forward if those things aren't moving forward. So, we look at what's the most impactful thing that's unblocked right now.
So, you want to see critical features, features that your product owners are really chasing after, in phase two if they have a different timeline than your project.
You want to see big goals spaced out along your timeline. This is a favorite one for me: you should have a content manager involved in your feature definition and sprint reviews, and your product owner and developers should be referring to that person as your local expert. You should be getting feedback on your work every single sprint, and some of it should be critical. The earlier you get that critical feedback, the earlier you can address it. You're not going to get everything right in the first sprint; you're going to have to come back and revise. And it's a lot easier to come back and revise before launch than after launch, when you're getting bombarded in your ticketing system.
And finally, if your technical leadership is talking about automation, they should also be funding it. So, they should know what it brings to your organization, and they should acknowledge that with dollars.
Thanks! I think we might have a couple minutes for questions. I can't actually see my clock.
>> JASON PARTYKA: Looks like we have about 15 minutes still.
>> BEC WHITE: Cool.
>> ANDREW OLSON: We do. If you want to put your questions in the chat, I can read them off.
>> BEC WHITE: I did put a bunch of links in the chat that were in our slides, because I know you can't click them when you're watching.
>> ANDREW OLSON: Another thing, too, if you visit the session page, you definitely want to rate the session, but great, we have a question from Rod.
Were there, sorry, I apologize, were there a lot of custom modules to connect to the internal APIs of other systems?
>> JASON PARTYKA: So, yes, there were. We actually took a bit of a two pronged approach to it. We split up what is necessary to make the integration within PHP from what is actually necessary to make the integration within Drupal. And we did that so that these integrations could be usable by any other PHP application that may need them.
And then we took that library, which we consumed with composer, and wrote a Drupal module to integrate with it. Depending upon what the business case was, sometimes we did a dedicated module, and other times we stuck it in where necessary. We definitely took a modular approach and used it where appropriate.
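The split Jason describes, a framework agnostic PHP library consumed by a thin Drupal module, can be sketched with a composer manifest like the one below. The package names are hypothetical, not the actual Argonne packages.

```json
{
    "name": "example/intranet-profiles",
    "description": "Drupal module wrapping a shared, framework-agnostic HR API client.",
    "type": "drupal-module",
    "require": {
        "php": ">=7.2",
        "example/hr-api-client": "^1.0"
    }
}
```

Any other PHP application can require the client library directly, while the Drupal module contributes only the Drupal specific glue.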
>> BEC WHITE: I think the main ones were the profile integration, the taxonomy integration, because there's an internal taxonomy API, and the authorization system integration.
>> JASON PARTYKA: Yes. As well as training and PTO.
>> BEC WHITE: Yes.
>> JASON PARTYKA: So all of those were developed as independent modules.
>> BEC WHITE: And the network status, so the IP range thing. The admin sections of all of the sites are only available if you're on the Argonne network, which now supports IPv6. That means there's a very large number of addresses, so that check is provided by a web service.
>> JASON PARTYKA: Yes.
>> BEC WHITE: Did we consider render caching for the profile information instead of importing it via Migrate? Or were there challenges worth mentioning?
>> JASON PARTYKA: So, I don't think we considered that at all. We were more focused on minimizing any sort of blocking network traffic.
But the other thing was some of this data, we wanted to be able to integrate with views and other first class Drupal operations.
>> BEC WHITE: And that profile data is actually used in a couple of different ways throughout the site. Profiles have two nodes: one is the data node, the other is the editable content node, and that data node is used to determine what organization you're a part of, among a couple of other things.
Next question. How do you manage ongoing maintenance with the subsequent phases post MVP work?
>> JASON PARTYKA: Well, I mean, we just bring them into our sprint cycles. Sometimes that means that other features get bumped that are for the subsequent phases. For the most part, it mostly ends up being pushed to the next sprint.
>> BEC WHITE: We actually have a pie chart about how we're spending our time within each sprint. It's divided up into sections: maintenance, new features, and I think bugs and user feedback.
So we try to put about 40 to 60% of the development effort in each sprint towards new feature development, and about 20% towards bugs and maintenance.
Was there an example of when automation seemed like a good idea first, but was judged too expensive, too inflexible, or otherwise undesirable? How do you make those decisions and frame those conversations?
>> JASON PARTYKA: Really, the basis is what we can do with our current technology. So, one thing that I did not mention is that our current CI/CD solution can only really do basic text inspection. We don't have a JavaScript-capable browser that we can execute tests against.
And so this is actually one of the improvements that we're working on right now. But until that is ready to go, certain things just can't be tested. So, for example, Covao has what they call a JS UI framework, but essentially the entire interface is rendered in JavaScript in the browser.
So that is something that we have not been able to test yet. But, hopefully, once we add this browser capability to our testing stack, we're going to be able to use it.
So, basically, if it's within the existing capabilities, or the cost is only minimal, maybe an extra 5 to 10% of effort, to do the automated testing, then we'll do it. If it exceeds that, we simply note it within our system, and depending on what is actually being quoted, we may put a future story in the backlog to get back to it.
So, obviously, that kind of gets prioritized very low, because the features have already been developed. There's not always a lot of value in going back and testing something you've already written and that's already out there, because you really want to be testing what's going forward.
But, it at least keeps it on our radar if we deem it to be important enough and appropriate to revisit.
>> BEC WHITE: The other big example of that I would bring up is the public site launch. We had originally planned a broad set of features: basically the public site, the intranet, and content syndication from the intranet to the public site.
So, that was an example of automation, like, syndicating content from the intranet to the public site automatically. And one of the things that we found is that there were a lot of requirements around those two sites having matching configuration, and the level of coordination between those two sites was too much for that first launch.
And so the pivot took out the syndication and took out the intranet, so there was a manual process in place for entering news and events on both the public site and the internal news site.
So we revisited that in rollout 2, or rather, in the intranet launch. But that was definitely something that was too expensive and too inflexible the first time around.
>> JASON PARTYKA: So from Kris here. Behat versus Drupal's test mechanism. I know you can do JS testing with Drupal, reasons you picked one versus the other?
So, we already had a little bit of familiarity with Behat internally. It was very basic. But when we originally started this, we didn't have any CI/CD testing capability, and for a whole slew of project reasons, it just wasn't going to happen in any sort of reasonable amount of time. So we basically put JavaScript testing to the side. Behat provided us with the capability to test the DOM and interact through a very basic text-based browser.
It really bought us a lot. So, that was the reason we started with that. But as I mentioned just a few minutes ago, there were some big limitations with that, and so we're working on addressing that and we're going to start seeing more tests actually written with Drupal's native testing capability. For the entire life of the sites, I don't think we're ever going to move away from Behat. It's too baked in right now.
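For context, the text-browser-level Behat tests Jason mentions look roughly like this. This is a hypothetical scenario using standard Mink and DrupalExtension step definitions; the path and expected text are illustrative placeholders, not from Argonne's actual suite:

```gherkin
# Hypothetical example; the path and page text are placeholders.
Feature: Staff profile pages
  Scenario: A profile page renders imported directory data
    Given I am on "/profile/jdoe"
    Then I should see "Business Information Systems"
    And I should see the link "Publications"
```

Because steps like these only inspect the returned HTML, they work with a headless, text-based driver; an interface rendered entirely in JavaScript needs a real browser driver, which is the limitation mentioned a few minutes earlier.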
But we'll have more tools and we'll be able to use the more appropriate tools at the right spots going forward. Yes, exactly.
>> BEC WHITE: Well, and that's kind of the nature of this application's life span: Behat is going to stay for the next period of time, but we can layer in new technologies as we go, and we have the platform to add those layers.
>> JASON PARTYKA: Any other questions? Okay.
>> BEC WHITE: How large is our dev team? Were all the developers internal to the Argonne team, or did you use developers from the Palantir.net team?
>> JASON PARTYKA: Ooh. So, it's both, both Argonne and Palantir. What do you think the core team was, Bec? The core team has about four, maybe four and a half devs on Argonne's side.
>> BEC WHITE: On Argonne's side, four and a half or three and a half? And on the Palantir side, we're involved in particular phases. I think we had three developers involved over the summer, during the height of the intranet development, and right now we have two.
>> JASON PARTYKA: And we also brought in other resources to code these integrations. A lot of these integrations were brand new, and so these other developers had to expose inputs for us to integrate with.
>> BEC WHITE: So the bare-bones development team is three developers at Argonne, who are also working on other projects. And at the maximum, it goes up to about 4.5 Argonne developers plus about three full-time Palantir developers when we're full throttle. So we definitely scale depending on the phase. Although, we've been figuring out how to refocus our throttle a little bit since we launched rollout 1 and went right into rollout 2 of the intranet. We just continued development; we didn't stop for that launch at all.
Did we use Acquia Cloud Site Factory and Acquia Content Hub? We did not use Site Factory; we are running multisites on Acquia. And we did use Content Hub. We tried to use Content Hub in the first phase, and as I mentioned, it got cut and deferred. We implemented Content Hub for the second phase, but the rollout didn't complete with the rest of the feature. So that's the 5% of the feature that didn't make it into our latest launch. Content Hub is implemented and kind of mid-deployment.
>> JASON PARTYKA: And as Bec said, we're not using Site Factory, but one of the critical things that you need for Site Factory is not having site-specific modules, or at least not having modules within your sites directory. We decided to stick with that as a technical principle.
So it's made management of the sites a little bit easier perhaps.
But other than that, I mean, we evaluated it, but we're not using it.
>> BEC WHITE: I mean, there's enough customization for each of the sites that developer side customization, that it's not just clicking a button and spinning up a new site and adding content. There's feature development around each of these separate multisites.
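As a point of reference, Drupal multisite without Site Factory is driven by a `sites/sites.php` mapping plus one directory per site. The hostnames and directory names below are hypothetical examples, not Argonne's actual sites:

```php
<?php

// sites/sites.php: map incoming hostnames to per-site directories.
// These hostnames and directory names are illustrative placeholders.
$sites['www.example.gov'] = 'public_site';   // served from sites/public_site/
$sites['inside.example.gov'] = 'intranet';   // served from sites/intranet/
```

Each site directory carries its own settings.php and files, while modules and themes live at the codebase level, consistent with the "no site-specific modules" principle Jason described.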
>> ANDREW OLSON: Great. Looks like we're at time. I'm going to go ahead and stop the recording. But before I do that, great session and thanks so much.
>> BEC WHITE: Thank you. Thank you all for all the questions. We really appreciate it.