Basic Assignments
 
Options & Settings
 
Main Time Information
Color Code: Yellow
Assigned To: Brandon Moore
Created By: Brandon Moore
Created Date/Time: 11/29/2022 10:34 am
 
Action Status: Blank (new)
Show On The Web: Yes - (public)
Priority: 0
 
Time Id: 9650
Template/Type: Brandon Time
Title/Caption: Meeting with Wayne
Start Date/Time: 11/30/2022 10:00 am
End Date/Time: 11/30/2022 11:30 am
Main Status: Active

Notes:

Wayne and I jumped in and did an hour-and-a-half meeting talking about the internal shopping cart. Wayne was showing me how he has to change things in order to run tests on certain pages. He is using an interesting combo: saving content and running included files inside of the saved content. The rendered page then becomes a memory variable (big, but still in memory), and he can test certain parts of the page that way. Interesting.
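
If I understood Wayne's trick right, it sounds like ColdFusion's cfsavecontent tag wrapped around a cfinclude. This is only a minimal sketch of that idea, not his actual code; the template path, variable name, and the check at the end are made up for illustration:

    <!--- Capture the rendered output of an included page into a variable --->
    <cfsavecontent variable="pageOutput">
        <cfinclude template="/cart/checkout.cfm">
    </cfsavecontent>

    <!--- The whole rendered page now lives in memory as a string, so we can
          check pieces of it without sending anything to the browser --->
    <cfif NOT findNoCase("Order Total", pageOutput)>
        <cfthrow message="Expected the checkout page to show an order total.">
    </cfif>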

We went over some of his tests and how he has to virtually fake it, mock it up, and simulate certain flows and/or actions. He is working on migrating an older code set that used the cfinvoke tag over to more of an object model. We went over mocking the data, throwing errors, and catching the errors and exceptions. The discussion on testing got us into topics like limiting the scope, only testing one page at a time (not its dependencies), testing without touching the database, keeping your data clean, simulating page redirects, and handling hard aborts or stops.
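
For context on the cfinvoke-to-object-model change, here is a rough before-and-after sketch. The component name, method, and arguments are hypothetical, not pulled from the actual cart code; the point is just the shape of the refactor:

    <!--- Older style: tag-based invocation of a component method --->
    <cfinvoke component="cart" method="addItem" returnvariable="result">
        <cfinvokeargument name="itemId" value="42">
        <cfinvokeargument name="quantity" value="2">
    </cfinvoke>

    <!--- Newer style: create the object once and call methods on it.
          An object like this is much easier to mock out in a test. --->
    <cfscript>
        cartService = new cart();
        result = cartService.addItem(itemId = 42, quantity = 2);
    </cfscript>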

Part of the test is separating the data from the logic. The logic doesn't know if the data is real or not; it just knows what to do with certain types and kinds of data. By splitting things up, you can test the code logic without full access to the actual data (live data) by faking or mocking the data (hardcoded or simulated data). The goal is to test the logical flow of the data through its process and/or routine. Ideally, the end goal is to make the processes more refined and bulletproof while keeping the data nice and clean, and to organize the code so it can be reused and maintained by others (where does each process live and how is it organized?). We briefly talked about putting supporting code closer to the usage of that code vs. a more disjointed style.
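
As a toy example of separating the data from the logic: the function below doesn't care whether the array it receives came from the database or was hardcoded in a test. The names and numbers are invented for illustration, assuming a simple line-item cart:

    <cfscript>
        // Pure logic: takes an array of line items, returns a total.
        // It never touches the database, so a test can hand it fake data.
        function calcCartTotal(required array lineItems) {
            var total = 0;
            for (var item in arguments.lineItems) {
                total += item.price * item.quantity;
            }
            return total;
        }

        // Mocked/hardcoded data standing in for the live cart records
        fakeItems = [
            { price: 10.00, quantity: 2 },
            { price: 4.50, quantity: 1 }
        ];

        // 10.00 * 2 + 4.50 * 1 = 24.50
        writeOutput(calcCartTotal(fakeItems));
    </cfscript>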

After going over some of the testing, we switched over and started looking at the actual cart code and how it works. We did some demos, looked at code, and did some drawings. Some of the discussion was about dealing with other developers and what assumptions they are making. If the process is super complicated, the other developers who don't know it guess at where to make changes and where to start. That has bitten us in the butt a few times. Often, we play this crazy game of "add on". If the other person/developer doesn't know where something starts, they simply add on where they can and go from there. Sometimes that is fine, but as things get more complex, it actually causes pain for the other developers.

To be honest, sometimes we don't know where things are going when they first get built out (first time around or phase one). It just keeps organically growing through that add-on game. Eventually things slow down and we can determine what is needed and wanted, but we may make a mess getting there. That is totally normal. You just have to be willing and able to go back and do some maintenance and clean-up. That's the part we have been missing. Currently, we just keep pushing forward without taking time to circle back around, and that process of circling back around takes major work sometimes. Part of our fracture project (a future project) will be going back and defining what we want (overarching scope and requirements) and what we need, and then building accordingly. The bigger the picture we can see (speaking hypothetically), the more we know what needs to be included and fully integrated - once you know the bigger picture, the path becomes clearer and you can plan accordingly.

Things get really crazy when there are differences between testing and production environments. Those differences could be settings, permissions, versions, data, usage, processes, scale, etc. It can get quite in-depth. The more ways there are to do things, the more things that need to be checked and maintained.
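
One small way to tame those testing-vs-production differences is to pick the environment once and read every setting from a single spot. This is just a sketch of that idea, not how adilas actually does it; the keys and values are made up:

    <cfscript>
        // Hypothetical per-environment settings, chosen once at startup
        environment = "testing";  // or "production"

        settings = {
            "testing":    { dsn: "cart_test", debug: true,  payments: "sandbox" },
            "production": { dsn: "cart_live", debug: false, payments: "live" }
        };

        config = settings[environment];
        // Everything downstream reads config.dsn, config.debug, etc., so the
        // code path itself stays the same in both environments.
    </cfscript>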

Wayne was saying that none of our customers have ever left our system because we don't do custom work. They love that. Some of them really hate it as well, as they have been burned and feel like testers vs. clients. Most clients don't have a problem with what we charge; for what they get, we are way under normal market prices. The biggest reason that some of our biggest clients leave is either the look and feel or consistency and scaling issues (they get too big for our product). The next step up is a big one (the cost of other custom software solutions). Wayne and I were talking about helping our code be less fragile and more bombproof and scalable. That is a task in and of itself. One of our biggest weaknesses is code reliability. Getting that dialed in and on speed dial - that's the goal.

Having said that, we have a viable working prototype (the adilas system or adilas platform). Not only are we constantly refining it, we have had paying customers spend hundreds of thousands of dollars per corporation to use our system as is. That's millions and millions of dollars that we have generated on a working prototype of sorts. That is awesome. I'm excited to see where it goes.

We finished off our meeting and conversation by talking about how even automated testing still needs some hands-on testing and virtual kicking of the tires. Wayne and I also talked briefly about percentage ownership stuff inside of the adilas company. We would like to invite Wayne to be a co-owner in Adilas, LLC (a multi-member LLC).