Adilas.biz Developer's Notebook Report - 6/29/2023 to 6/29/2023 - (6)
Shop 10291 Merge and deploy updates for SpringBig time zone issue 6/29/2023  

Merging and pushing up code with Eric. After the initial work, we spent some time talking about data modeling. Here are some of my notes.

- Eric would love to do some more data modeling - taking everything into consideration and making a plan. He used to do this for other companies that he has worked with and for. Great resource. We could really use his help with adilas lite or fracture. This was like a mini database and data modeling lesson of sorts. I was loving it and scribbling down notes as quickly as I could. Fun stuff.

- We talked about flex grid tie-ins, flex attributes, and parent attributes. Basically, things that he sees that we do that might be built out into more efficient tools and features. Maybe rework some of this and/or combine some of the features.

- What really connects to other things (natural relationships) or what things are forced together (forced or special relationships)? We may want to look at use cases and try to pull out the natural relationships. Then build your application according to those natural relationships. You may still need to allow the forced or special relationships, but those become the edge cases vs the norm.

- If something happens over and over again, this should be part of the core system. Currently, we do use a lot of flex grid tie-ins to help with some of these special cases. As a side note, some of these one-off features are becoming more normal and should have their own logic and tables vs putting everything into the flex grid tie-ins. Great tool for getting things started but eventually, you may need to build out specific tables, logic, and pages. Make it more normalized and more efficient.

- As a note, what does the flex grid do? It allows for one-to-one connections, one-to-many connections, adding log notes to anything, tying things together (main id's to sub id's or main id's to other main id's), and up to 30 custom fields. Once again, it can be on a one-to-one basis or used and set up as a one-to-many relationship. Here is a help file that has more info on the flex grid tie-ins.
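The tie-in concept above can be sketched as a single generic linking table. This is a hypothetical schema with invented table and column names, not adilas's actual structure (the real flex grid allows up to 30 custom fields; only three are shown):

```python
# Hypothetical sketch of a flex-grid-style tie-in table: a main id tied
# to a sub id (or another main id), a free-form log note, and a handful
# of custom fields. All names here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE flex_grid_tie_ins (
        tie_in_id   INTEGER PRIMARY KEY,
        main_type   TEXT NOT NULL,   -- e.g. 'invoice', 'po', 'customer'
        main_id     INTEGER NOT NULL,
        sub_type    TEXT,            -- optional: what the main id ties to
        sub_id      INTEGER,
        log_note    TEXT,            -- "add log notes to anything"
        custom_1    TEXT,            -- custom fields (real table: up to 30)
        custom_2    TEXT,
        custom_3    TEXT
    )
""")

# One-to-many: one invoice tied to several POs.
rows = [("invoice", 101, "po", 7, "partial shipment", None, None, None),
        ("invoice", 101, "po", 8, "back order", None, None, None)]
conn.executemany(
    "INSERT INTO flex_grid_tie_ins "
    "(main_type, main_id, sub_type, sub_id, log_note, custom_1, custom_2, custom_3) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)", rows)

linked = conn.execute(
    "SELECT sub_id FROM flex_grid_tie_ins "
    "WHERE main_type = 'invoice' AND main_id = 101 "
    "ORDER BY sub_id").fetchall()
linked_pos = [r[0] for r in linked]  # the POs tied to invoice 101
```

The trade-off noted above shows up here: one table can link anything to anything, but every query has to filter on type columns, which is exactly why frequently used relationships eventually deserve their own dedicated tables.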

- As a note, the flex grid tie-ins have been the big brother to the things we are trying to build called flex attributes or real in-line database extensions (real in-line extensions for short). Here is a small, older graphic link of what we are trying to do.

- We talked about the bus to motorcycle project (datasource project or world building project). We are headed to a new model where the corporation id numbers (corp_id) will be left out per database. Each company will have its own database and thus may not need the corp id number. This deals with table names, joins, and data that gets stored in the database.

- Back to the flex attributes and a possible option to build them right into the main entities or high level tables (for the 12 main players or wherever we see fit to put them). This option has some pros and cons. We'll have to work this out. Currently, I'm really leaning towards something similar to what we did for the current flex attributes or parent attributes. Let them build and setup any custom fields that they need. Dynamic relational model. Just for fun, here is the progression - flex grid tie-ins (2009), sub inventory attributes (2015), parent attributes (2016/2017), flex attributes (2020).

- Lots of talk about data modeling and being able to take off the corp_id, including on the end of corp-specific table names - for example: invoices_53, invoice_payments_53, time_sub_inventory_53, and a slew of others.

- Maybe break the pili or po invoice line items into two different pieces. It was joined together to help with inventory counts over time and across multiple locations. Anyways, we may look at separating those tables into multiple pieces. Super important, make sure to remember and include locations. If just a single location, we could do the architecture differently. However, with multiple locations, it gets a little bit more complicated or tricky. There are tons of other possible options.

- The payee table should be broken up as well. Currently, if a person or entity is tied to an expense/receipt, a PO, or an inventory item, it lives in the payee table. Payees consist of users, employees, vendors, and special customers that had to get paid out of the system (a copy and convert process). Anyways, we may want to break that table up into users, vendors, and special customers (something like that).

- We talked about a concept called "attribution" and data normalization levels. There are two main types of data models: the logical data model and the physical data model. You have entities, and entities have attributes. Eventually, those entities and attributes get translated into tables, columns, and fields in a database. Often, an attribute becomes its own database column or field.

- Attributes are different than types.

- We talked about fields like "flag_for_1099", "password", etc. Those are attributes for certain entities. However, does a vendor need a password field? Most likely not. Each field or attribute needs to go with the entity that it belongs with. We, at adilas, tend to mix and blend some of the attributes between different entities. In some ways that is fine, but it requires explanations, instructions, and training. It's not as easy to follow without someone to guide you along. Anyways, some good conversations about data normalization stuff. What goes with what and why does it fit like that?

- Make the names readable and logical where possible. We do a pretty good job on that, but there is some randomness in there as well. Along with that, we jumped into talking about a section called special accounts. We are planning on using that for gift cards, loyalty points, in-store credit, vendor credits, punch cards, and other special account transactions where we almost need a bank account style with a rolling balance and being able to add/subtract using individual transactions or actions. Anyways, we have a few fields in there called dev_flag_1, dev_flag_2, and dev_flag_3. We use those flexible fields to help with certain parts of the process. In a way, we didn't know what we were going to need, so we added in some flex fields. Well, now, those flex fields have rules and hold certain data that could be its own column or field. However, because we didn't know what would be needed, the fields are somewhat mixed, depending on what is stored there and what kind or type of transaction record is being stored (loyalty points vs gift cards or whatever).
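The bank-account style special account with a rolling balance could look something like this minimal sketch. The class and field names are invented for illustration, not the actual special accounts schema:

```python
# Hypothetical sketch of a "special account" ledger (gift cards, loyalty
# points, in-store credit): a rolling balance built from individual
# add/subtract transactions, bank-account style. Names are invented.
from dataclasses import dataclass, field

@dataclass
class SpecialAccount:
    account_type: str            # 'gift_card', 'loyalty_points', ...
    transactions: list = field(default_factory=list)

    def post(self, amount: float, note: str = "") -> float:
        """Record one add/subtract action and return the new balance."""
        self.transactions.append((amount, note))
        return self.balance

    @property
    def balance(self) -> float:
        # The rolling balance is always derivable from the transactions.
        return sum(amount for amount, _ in self.transactions)

card = SpecialAccount("gift_card")
card.post(50.00, "initial load")
card.post(-12.50, "purchase")
```

Keeping the balance derivable from the individual transactions (rather than only storing a mutable total) is what makes the "bank account style" auditable after the fact.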

- The conversion trickled over into human reference fields vs computer identifiers, ids, or computer reference fields. They are different and play different roles.

- As you think things out, eventually you have to transform the logical models into physical models. Eric kept saying that we should be shooting for the third normal form (data modeling and database modeling). Figure out the whole business world (plan it out as best you can) and then build out what you need, based on what you see and/or know.
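As a toy illustration of third normal form (every non-key column depends on the key, the whole key, and nothing but the key), using made-up vendor and invoice-line fields:

```python
# Denormalized: vendor_name depends on vendor_id, not on the line id,
# so it repeats (and can drift) across rows. All data here is invented.
flat_lines = [
    {"line_id": 1, "vendor_id": 53, "vendor_name": "Acme", "amount": 10.0},
    {"line_id": 2, "vendor_id": 53, "vendor_name": "Acme", "amount": 4.5},
]

# 3NF: vendor attributes live once, keyed by vendor_id.
vendors = {53: {"vendor_name": "Acme"}}
lines = [
    {"line_id": 1, "vendor_id": 53, "amount": 10.0},
    {"line_id": 2, "vendor_id": 53, "amount": 4.5},
]

# Reconstructing the flat view is just a join at read time.
rebuilt = [{**ln, **vendors[ln["vendor_id"]]} for ln in lines]
```

Nothing is lost by splitting the tables - the flat report is recoverable by joining - but updates now happen in exactly one place.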

- We talked about aggregates and data warehousing. I mentioned that I would like to build out tables for yearly per location, quarterly per location, monthly per location, weekly per location, and daily per location. We would also have the underlying transactions or transactional database tables (raw data that holds all of the data). The other tables would be what we transform the transactions into (a form of aggregates or business intelligence).
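The daily-per-location idea can be sketched as a simple rollup over the raw transaction table. Data and field names are invented for illustration:

```python
# Minimal sketch of transforming raw transactions into a daily-per-
# location aggregate (a form of the BI tables described above).
from collections import defaultdict

transactions = [
    {"date": "2023-06-29", "location_id": 1, "total": 25.00},
    {"date": "2023-06-29", "location_id": 1, "total": 10.00},
    {"date": "2023-06-29", "location_id": 2, "total": 40.00},
]

# Bucket key = (day, location); each bucket holds count and sum.
daily_per_location = defaultdict(lambda: {"count": 0, "sum": 0.0})
for t in transactions:
    bucket = daily_per_location[(t["date"], t["location_id"])]
    bucket["count"] += 1
    bucket["sum"] += t["total"]

summary = dict(daily_per_location)
```

Weekly, monthly, quarterly, and yearly per-location tables would roll up the same way - same logic, coarser bucket key - while the raw transactional tables keep all of the underlying data.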

- Along with aggregates, Eric was saying that sometimes you can watch the database and see what tables, queries, and reports cost the most (data, traffic, or processing time/energy/frequency). You then build out aggregates based on those findings and/or known needs. For us, we've been doing this for long enough, we know a few places that could really help with speed, server load, and provide great BI or business intelligence levels.

- Our system has to go clear out to the full accounting level. That changes how we do certain things. That is awesome! Our end goal, more or less, is perfect accounting, aggregates, per day, per location, and per category. Some of those (category levels) vary but they have mostly been defined in the current system. That is huge. We have a plan, we have a path. We just want to refine it. Eventually: year over year reporting, month by month comparisons, real-time data - all data is live and searchable (adilas).

- Snapshots, aggregates, different preset and controlled data levels. We may need current data (tables without any dates - assumption of current counts, values, sums, totals, averages, maxes, mins, etc.) as well as dated or historical data (tables with dates to allow previous or prior lookups and date driven lookbacks).

- What about enterprise mappings and cross-corp stuff? We need to plan that out as well.

- We also need to consider servers, speed, reliability, backups, redundancies, and how deep we are going.

- Lastly, Eric could help with a ground up data model. We could pick a topic, break it down, and do a number of smaller sessions vs a big push. That would be too much. Anyways, great meeting and Eric could be a great resource for planning, checking out our decisions, and planning out the best course of action. Good stuff!

 
Shop 10295 Code review 6/29/2023  

Code review on some of Bryan's files. Merged and pushed up files. Got an email from Cory, there was a small error. Light debugging and repushed up the code.

 
Shop 10296 General 6/29/2023  

Various activities. Opened up the additional vehicle assignments per customer project and started back on that project. Got an email from a 3rd party that needed help with some API socket endpoints. Switched gears to help them out. Spent about 45 minutes prepping things. Sent them a bunch of information, did some live testing of the API endpoints, and reached out to the contact with links, info, samples, and screenshots. Lots of transitions between projects.

 
Shop 10181 Planning with Alan 6/29/2023  

Before Alan jumped on, I was working with my sister on some ideas for a simple store front or consignment type ecommerce site. She has a bunch of products (hand built crafts and stuff) that I would like to help her sell using the adilas marketplace (future project or future business idea - planning for fracture and adilas lite).

Meeting with Alan over the GoToMeeting session. We started out and I was reporting on a few entries from yesterday. We reviewed what I learned in the Adobe ColdFusion Training Event (# 10256). Went over the scanned notes that I took. We also talked about the virtual obstacle course that we would like to build to test some of our ideas and prototypes (# 10294). Fun little review.

Next, we switched over to what Alan has been working on and playing with. Lots of fun learning and prototyping. Alan spent some time playing with REST API's and GraphQL options. He was really excited about certain parts of the GraphQL stuff that he was learning. Limiting data that is sent back to the users and making things easier on the servers.

Here are a few of my notes from our meeting:

- Alan was playing with GraphQL, a Node JS server, Apollo, Prisma, and other tools and features.

- Lots of queries, mutations (queries that alter the underlying data), resolvers, and subscriptions (observer/subscriber type models).
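The GraphQL pieces above can be sketched in plain Python: queries read data, mutations change it, and resolvers are the functions that back each field. All names and data here are invented for illustration, not Alan's actual prototype:

```python
# Toy resolver map in the shape GraphQL servers use: a "Query" branch
# for reads and a "Mutation" branch for writes. Invented example data.
invoices = {101: {"id": 101, "total": 35.0}}

resolvers = {
    "Query": {
        # A resolver takes field arguments and returns the field's data.
        "invoice": lambda args: invoices.get(args["id"]),
    },
    "Mutation": {
        # A mutation resolver alters the data, then returns the new state.
        "setTotal": lambda args: (
            invoices[args["id"]].update(total=args["total"])
            or invoices[args["id"]]
        ),
    },
}

total_before = resolvers["Query"]["invoice"]({"id": 101})["total"]
updated = resolvers["Mutation"]["setTotal"]({"id": 101, "total": 40.0})
```

A real GraphQL server adds a schema and query parser on top, which is also where the "limiting data sent back to users" benefit comes from - the client asks for only the fields it wants.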

- Small demo on what Alan learned from playing with GraphQL. This included unique ids, JSON web tokens, tons of files and folders (dummy setup files to put the pieces in place and then you go in and build them out - sort of a prebuilt file/folder structure for your app).

- Looking into web sockets and options to push/pull data - going through those web socket options.

- More talk about API endpoints, automation of the documentation, layers, transactions, locks based on transactions (with full rollback if needed), and throttling API endpoints.
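One way to implement the endpoint throttling mentioned above is a sliding-window limiter. This is a generic sketch, not adilas's implementation; the limit and window values are arbitrary examples:

```python
# Sliding-window throttle: allow at most `limit` calls per `window`
# seconds, tracked with a deque of call timestamps.
import time
from collections import deque

class Throttle:
    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.calls = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

# Two calls per minute allowed; the third (at t=2s) is rejected,
# and by t=61s the window has rolled over, so calls succeed again.
t = Throttle(limit=2, window=60.0)
results = [t.allow(now=0.0), t.allow(now=1.0),
           t.allow(now=2.0), t.allow(now=61.0)]
```

Per-endpoint (or per-token) throttles would just be a dictionary of these, keyed by endpoint path or API key.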

- Tons of other topics such as: NoSQL databases, document stores, relational databases, query caching, ORM models (either object-relational mapping or object-role model), properties, tables, access layers, database updates, etc.

- We talked about using REST API's and having our bigger pieces maybe even broken down into smaller pieces. For example, instead of just having a folder for invoices, we may want something like: invoices/single, invoices/multi, invoices/search, invoices/export, etc. Basically, sub sections within the main invoice player group. We have 12 main application player groups. Each one does stuff individually, as a group or a whole, and other sub functions. Maybe think along those lines.
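The sub-section idea above amounts to a small routing table per player group. This is a hypothetical dispatch sketch; the handler behavior and route names are illustrative, not the actual adilas API:

```python
# Sub-resource routes for one main player group (invoices), as named
# in the note above. Handlers here just echo which endpoint ran.
def get_single(params): return {"endpoint": "single", "id": params.get("id")}
def get_multi(params):  return {"endpoint": "multi", "ids": params.get("ids", [])}
def search(params):     return {"endpoint": "search", "q": params.get("q", "")}
def export(params):     return {"endpoint": "export", "format": params.get("format", "csv")}

routes = {
    "invoices/single": get_single,
    "invoices/multi":  get_multi,
    "invoices/search": search,
    "invoices/export": export,
}

def dispatch(path, params):
    handler = routes.get(path)
    if handler is None:
        return {"error": 404}
    return handler(params)

result = dispatch("invoices/search", {"q": "june"})
```

Each of the 12 main player groups would get its own prefix and route set, keeping individual, group-wide, and sub-function behavior cleanly separated.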

- Spent some time going over security options, JSON web tokens, and allowing servers to change the data while the browsers can't. Frontend security, backend security, Node JS, Adobe ColdFusion, and options for both sides of the fence.
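The JSON web token property discussed above - the browser can read the data but can't alter it without detection - comes from an HMAC signature over the payload. This is a minimal hand-rolled sketch using only the standard library; a real deployment would use a vetted JWT library rather than this:

```python
# JWT-style signed token: the server signs the payload with a secret,
# so any client-side tampering breaks verification. SECRET is an
# invented example value and would live server-side only.
import base64, hashlib, hmac, json

SECRET = b"server-side-secret"

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    body = b64(json.dumps(payload, sort_keys=True).encode())
    sig = b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify(token: str) -> bool:
    body, sig = token.rsplit(".", 1)
    expected = b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"user_id": 42})
# Flip one character to simulate a tampered token.
tampered = token[:-1] + ("A" if token[-1] != "A" else "B")
```

Real JWTs add a header segment and standard claims (expiry, issuer, etc.) on top of this same sign-and-verify shape.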

- Good meeting with some good research and R&D stuff. Good job Alan!

 
Shop 10297 Meeting with Bryan 6/29/2023  

A couple different meetings with Bryan. We met and did a quick push, going over some Bitbucket stuff, and looking at commits. Phone call and then back on the GoToMeeting. Doing some debugging. Commented out some code, did some testing, and re-pushed the code. Recording notes on work done today on ship A.

 
Shop 10298 Recording Notes 6/29/2023  

Recording notes from the day (6/29/23). Busy day, bouncing all over the place. Fun but long.